Enabling Automation Podcast: S4 E4
We’re excited to bring you the fourth season of our podcast series, Enabling Automation. This monthly podcast series brings together industry leaders from across ATS Corporation to discuss the latest industry trends, new innovations and more!
In the fourth episode of season 4, we welcome host Sarita Dankner, who is joined by Gord Raman and Christopher Green to discuss what leaders need to know about AI and accountability.
What we discuss:
- What is the difference between governance and accountability in the context of an AI system?
- Why are accountability, governance and ethical issues a top priority when it comes to AI?
- What role does leadership play in setting the tone for responsible innovation, and what does ethical culture mean in an AI context?
Host: Sarita Dankner, ATS Corporation
Sarita is the Associate General Counsel and Corporate Secretary at ATS Corporation and has over 20 years of legal and compliance experience across public and purpose-driven organizations, leading legal strategy, commercial governance and risk programs. She focuses on enabling responsible growth that aligns innovation and technology with governance and ethics.
Guests: Gord Raman, Chief Legal Officer, ATS Corporation, and Christopher Green, Associate General Counsel, ATS Corporation
Full Transcript of Enabling Automation: S4, E4
SD: Welcome to Enabling Automation, where we bring together experts from across the ATS group of companies to discuss topics relevant to those using automation in their businesses. My name is Sarita Dankner. I'm the Associate General Counsel and Corporate Secretary at ATS Corporation, a global leader in automation solutions. I have over 20 years of legal and compliance experience across public and purpose-driven organizations, leading legal strategy, commercial governance and risk programs, with a focus on enabling responsible growth that aligns opportunity, innovation and technology with governance and ethics. Today, we're digging into one of the most important and complex issues businesses are facing right now as it relates to AI: accountability, and more specifically, the intersection of artificial intelligence, legal and reputational risk, and responsible governance. My guests today are Gord Raman, Chief Legal Officer of ATS Corporation, and Christopher Green, Associate General Counsel at ATS. Gord and Christopher, thanks for joining us.
GR: Thanks Sarita, great to be here.
CG: Thank you, Sarita.
SD: Before we jump in, I thought we’d start with a few basic concepts to anchor the discussion. Gord, let’s start with you. The subject of today’s podcast refers to accountability and governance. How would you distinguish between governance and accountability in the context of AI systems?
GR: Well, maybe to take a step back, let's start with governance. In my mind, governance is always about oversight structures. Are people looking at things in a holistic way, and are they setting up a process to make sure things are being done properly? Accountability is much more individualistic, much more about responsibility. So to me, the governance structure is really a process to make sure the right things are being done, and accountability is each and every one of us actually doing the right things.
SD: Thanks, Gord. I think that distinction is really helpful, especially when we’re thinking about AI from a foundational perspective. And Christopher, today we’re talking about AI in automation. How would you define that? Can you explain what that actually means in real world business terms?
CG: Sure. So when we say AI in automation, we're talking about using artificial intelligence to make automated systems smarter and more adaptive. Think of an automated production line. Traditionally it follows a fixed sequence of steps. With AI, those systems can now detect variations, optimize their own performance, predict maintenance needs, or even recommend process changes in real time. In business terms, that means efficiency, consistency, and better decision making. But, and that's why we're also talking about this today, it also raises some interesting new questions about accountability, bias, and control.
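To make that idea concrete, here is a minimal illustrative sketch in Python of the kind of adaptive behaviour Christopher describes: a line controller watching a sensor stream and flagging a variation that a fixed-sequence line would simply run past. Everything in it, the names, thresholds, and simulated data, is hypothetical; it is not an ATS system, just the general pattern.

```python
# Minimal illustrative sketch (hypothetical, not an ATS system): a production-line
# monitor watches a sensor stream and flags variations that a fixed-sequence
# line would ignore, e.g. to trigger a maintenance check.
from collections import deque
from statistics import mean, stdev
import random

WINDOW = 50          # number of recent readings used as the "normal" baseline
THRESHOLD = 3.0      # flag readings more than 3 standard deviations from baseline

def monitor(readings):
    """Yield (index, value, is_anomaly) for each sensor reading."""
    history = deque(maxlen=WINDOW)
    for i, value in enumerate(readings):
        is_anomaly = False
        if len(history) == WINDOW:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > THRESHOLD * sigma:
                is_anomaly = True   # in practice: alert, slow the line, schedule maintenance
        history.append(value)
        yield i, value, is_anomaly

if __name__ == "__main__":
    random.seed(0)
    # Simulated vibration readings with a drift injected near the end.
    data = [random.gauss(1.0, 0.05) for _ in range(200)] + \
           [random.gauss(1.6, 0.05) for _ in range(20)]
    for i, value, flagged in monitor(data):
        if flagged:
            print(f"reading {i}: {value:.2f} -> variation detected, flag for maintenance")
```

In a real deployment the "flag" step is where the accountability questions discussed in this episode begin: who reviews the alert, and who owns the decision it triggers.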
SD: I think that’s a great way to ground this discussion, because I think it sounds like even as we’re still learning, AI is already quite deeply embedded in how companies operate. I’d ask both of you, why do you think accountability, governance and ethical issues in AI are a top priority for businesses right now? What’s at stake?
GR: I'll, maybe I'll start on that, and Christopher can jump in. I think we're at the beginning stages of how AI is being understood and used by companies. Everybody is trying to figure it out, and when you're trying to figure things out for the first time, that's also the right time to set the rules of the road, if you will. How do we use AI? What are the rules under which we should operate? It's a bit like opening up any tool that you buy and reading the instruction manual first, as opposed to just jumping in and using the tool. If you just jump in and use a tool, you can have unintended consequences, and things can go terribly wrong. The idea of focusing on governance and accountability and those types of things at the outset of the use of AI is just to make sure that we all have a set of rules that we follow, so that we end up using things responsibly and we don't have unintended consequences.
CG: Gord and I are very much in agreement. I think the challenge, quite openly, is that AI has exponential growth with regard to computing power. One interesting statistic I read the other day is that, compared with the current best model available, for example ChatGPT's 4.0, in two years' time we're going to have a system with a thousand times the computing power of the best system in the world today. So the stakes are high, because AI impacts not just efficiency but also trust: the trust of our customers, our employees, but also regulators. Because AI systems, as great as they are, can make mistakes. That can lead to faulty products. You can have, and it's been in the news, biased hiring decisions. You can have reputational damage. So without that important element of proper governance, we as a company, and many companies like us, can run into legal exposure, and, worst case, we do something that's in the public eye and we lose public confidence. So when we go into discussions with our business and technical people, we have the discussion about whether governance is opposed to innovation. In my mind, responsible governance isn't about slowing innovation. It's about making sure the innovation is sustainable and defensible. So to my mind those two elements aren't contradictory; they're actually complementary.
SD: I absolutely agree with both of you, and you both mentioned trust and reputation as being at stake here in terms of how we approach this and do it right. I think we'd agree that trust and reputation are both very hard to rebuild once they're lost. I think it was Warren Buffett who said it takes 20 years to build a reputation and five minutes to lose it. So I feel like we're being faced with some really high stakes here. Let's go back to governance for a second: the systems and decision-making structures that guide how organizations adopt and oversee AI. Gord, how involved should boards be in shaping AI strategy and risk governance? Are we seeing these expectations evolve?
GR: Again, I think we're at such a beginning point in the use of AI that I would expect all boards of companies to be involved to a large extent at the outset. The second thing I would say is that for companies, it really depends on how they're using AI, and I think the best way to think about this is as a spectrum. On one end of the spectrum, you have a company like OpenAI, whose sole reason for existing is AI. That's its business. So it's impossible for that board not to be involved in the governance of AI every single day, because that's the existence of the company; it depends on it. On the other end of the spectrum, think of many other companies where AI may really just be some functionality, for example as part of their Microsoft suite, being used to help employees do things better, to make them a bit more efficient, to make their lives a bit easier. AI in that instance doesn't go to the existential nature of the company itself, so in that scenario a board doesn't need to be as involved as in the OpenAI example. That gives you a flavor for how to think about how involved a board should be. Having said that, I come back to the first point, which is that today AI is so nascent, if you will, that it's kind of incumbent on all boards to start asking the right questions. A board should understand: what is the level of AI use? Where on the spectrum do we fall? It's only by asking those questions that they will figure out the level of oversight, the level of involvement, that they should actually be having.
SD: What I'm hearing you say is that a company's practical reality and function, and really understanding what that means for it, is just as important as the actual structure and governance that the board needs for oversight. What governance structures are we seeing in the market? Things like maybe AI committees, internal audit, ethics reviews. What are we seeing, what's working, and what's not working?
GR: So again, if I go back to my example of looking at this on a spectrum, the closer you are to the OpenAI type of company where AI is fundamental to its existence, the more you will start to see boards having things like AI committees, just like boards today. For example, if you're in a heavily industrial company, you might have a risk and safety committee, because that helps the board discharge one of the things that's most important to the company, which is how to operate safely in an industrial setting. Similarly, it would make sense for a board to potentially have an AI committee, the more fundamental AI is to the business. On the other end of the spectrum, where companies are really just using AI to allow their employees to be more efficient or more effective, the governance structures will probably be more policy oriented. The boards will want to know that appropriate policies are in place to help make sure that employees are using AI appropriately, and then they'll want some sort of reporting structure to know that those policies are being complied with. And, I keep coming back to this, all of this is still at a fairly early stage in how AI is being rolled out in companies, so how effective some of these governance structures are remains to be seen in the coming months.
SD: And Christopher, just building on that. So we’ve got obviously board oversight and governance. What about leadership? What role does leadership play in setting the tone for responsible innovation and what does ethical culture mean in an AI context?
CG: That's a great question. I know it comes across as a little bit of a cliché, but I think the tone from the top is so important, because leadership sets the tone. If executives make it clear that the speed that comes with AI cannot come at the expense of ethics, then teams will approach AI projects differently. With ethics and culture, that means you build systems with transparency, fairness and accountability at their core. That's what I found fascinating just as this entire AI journey started: a lot of the people who were building these AI models couldn't really explain what was going on in that black box. Sometimes you were getting good responses, and sometimes you were getting very, very questionable responses. So when you take the ethical approach, you question how algorithms are trained, you look at the data, and you sometimes have to acknowledge that the data that was collected actually has biases of its own, and unfortunately AI is just exceedingly good at identifying patterns. You also need to bear in mind who's impacted by these decisions, whether these decisions are truly ethical, and how these decisions are explained. We need to think about ethics, transparency and fairness as core values of AI. One very interesting article I read the other day in The Economist said that a lot of the companies operating first and foremost in AI, even those teams, have some ethical concerns, but at the same time they feel they have to be first, because the first out of the gate wins. You can read in the newspapers that all the major players currently developing large language models and other GPTs are concerned about the ethical elements, but at the same time they feel compelled to move forward faster and faster.
SD: So it sounds like it’s, it’s a constant struggle or balance between can we, should we and then how should we?
CG: Exactly. I think that nails it the way you just said it, Sarita, because I think that’s the super difficult challenge. We have this amazing tool. But at the same time, how can we be using it and how should we be using it? And I think that’s what everybody, regardless of the industry you’re working in or where you are on this planet, everybody’s trying to come to terms with exactly that problem at the moment.
SD: Yeah, very interesting, very true. And Christopher, you have global responsibilities at ATS, but you're based in Europe. From your experience, does AI governance look different in Europe versus North America? And if so, how do these regional regulatory approaches shift accountability and leadership responsibilities?
CG: Yeah, and I think that raises a super interesting point. Europe takes a different approach to this amazing tool, AI. In Europe there are a lot of regulatory frameworks. We have the EU AI Act, which places a heavy emphasis on risk classification and human oversight, so it's a very proactive but rules-based approach. And again, this is me, based in Europe, looking over the pond at North America, but to me the North American approach often seems more principle-based, more focused on innovation and the results coming out of it. What I'm also sensing and seeing, though, is that there's increasingly additional sector-specific guidance, depending on which industry you're working in. What I find fascinating, because we are a global company, is that we have the additional challenge of being flexible with our approach, on one side meeting the stringent European requirements while at the same time adapting to North American expectations as well. For us as a company that adds a very interesting twist: doing well both in the North American market and in the European market while respecting and being mindful of the different approaches. I'm not saying one is better than the other, they're just different, but it's super interesting to see how different countries and different larger multinationals come to terms with that challenge as well.
SD: Absolutely. If we just turn back to legal risk, we've referenced it briefly. As lawyers, three lawyers on a podcast today, we're trained to ask what could go wrong, and with AI I feel that list is growing quickly. So, Gord, maybe we can talk about liability a little bit. When AI is involved in making or influencing a decision, what, from a legal perspective, can go wrong?
GR: When we think about AI, I think it's important to understand that with today's AI we're really talking about these generative models, and to understand what that really means, it helps to understand the difference between probabilistic and deterministic. The AI models being rolled out now are all probabilistic, meaning, think of it this way: if two different people put in similar prompts, you're not guaranteed to get the same answer. The AI is generating a response based on everything that it's learned, so the response is probabilistic; it's within some range of probabilities, not a precise answer. So I think it's important to understand that that's how AI works. If you're operating in a zone where you need precision in something you're dealing with, but you then use an AI model to give you something that's not precise, only within a range of probabilities, the legal risk, I think, starts to increase. I'd like to hear Christopher's perspective on it as well, but this is the way I think about it: if you're trying to do something that requires a precise answer, and you take what AI gives you, which is more probabilistic, and just apply it, you've likely stepped into a zone of risk, versus taking what AI has given you, thinking about it, and then formulating it into a more precise answer.
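As a toy illustration of Gord's point (the words and numbers below are invented, not taken from any real model), a generative model picks its output by sampling from a probability distribution, so the same prompt can come back differently each time, whereas a deterministic system always returns the same result:

```python
# Toy illustration (hypothetical scores) of probabilistic vs. deterministic output:
# a generative model samples from a probability distribution over candidate words,
# so the same prompt can yield different answers on different runs.
import math
import random

# Imaginary model scores ("logits") for candidate next words after a prompt.
logits = {"approve": 2.1, "review": 1.9, "reject": 0.4}

def sample_next_word(logits, temperature=1.0, rng=random):
    """Softmax over scaled logits, then sample; higher temperature means more variability."""
    scaled = {w: s / temperature for w, s in logits.items()}
    m = max(scaled.values())
    total = sum(math.exp(s - m) for s in scaled.values())
    probs = {w: math.exp(s - m) / total for w, s in scaled.items()}
    r, cumulative = rng.random(), 0.0
    for word, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return word
    return word  # guard against floating-point rounding

if __name__ == "__main__":
    random.seed(42)
    # Two "people" sending the same prompt can get different words back.
    print([sample_next_word(logits) for _ in range(5)])
    # A near-zero temperature behaves deterministically: always the top-scoring word.
    print([sample_next_word(logits, temperature=1e-6) for _ in range(5)])
```

The first list will typically mix the candidates; the second repeats the single most likely one, which is why precision-critical work needs the human review Gord describes rather than raw sampled output.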
SD: Right. So using it to support, but not to drive.
GR: That's right. Correct.
SD: Christopher, what do you think?
CG: Yeah, I agree. When I think about how I've applied AI in the legal profession, I've seen it on the one side being exceptionally good, like surprisingly good. And then, exactly to Gord's point, when I tried to share that with colleagues, I put in the exact same prompt and a very different answer came up. Unfortunately, in that moment it's like the joke about me doing something when I'm alone versus me doing something when I'm being watched: the second time, when I actually showed it to a colleague, the result was nowhere near as impressive as I wanted it to be. Even when, as I call it, AI has one of its better days, I still wouldn't feel that I can just copy and paste the result or the answer. Maybe it's just us as lawyers, the legal profession, the work we do and the high stakes that are often involved, but I quite frankly don't believe that AI is there yet, and I'll emphasize the yet. I would always want to have human oversight, because we always have to bear in mind that AI per se is not an intelligence; it's just ferociously good at pattern recognition. And if those patterns are not the right patterns, or what we perceive not to be the right patterns, we will get the wrong results. That's why it's so important to have that human oversight. What I enjoy seeing is, for example, a great example I heard about the other day of AI being deployed in an ER, where its analysis of incoming patients was right 80% of the time, but also ferociously wrong 20% of the time. What worked best was when you actually had a medical professional team up with AI, because the humans made mistakes as well. What really worked well was having the human explain the situation to AI; AI would do its work and come back with recommendations, and in certain instances it was actually prompting the human to get better results. So I enjoy those fields of operation, like the medical profession, the legal profession, and certain other professions, where the combination of the amazing computing power of AI with professional oversight, medical, legal or otherwise, is, at the moment, how I see using AI. That being said, there are amazing things that AI can do today, and a lot of other smaller tasks can certainly be handed over to AI, but certainly not the high-risk, high-stakes elements that we three often work on.
GR: Can I just pick up on one point, just to again provide a framework on how to maybe think about this. Christopher's example about the use of AI in the medical field is interesting. The other way to think about this is: if you're using AI and the output is helping you, let's say, save money, the risk profile is different, because you can address that in a contract. Either it helped you save money or it didn't, and if it didn't help you save money, and that's really the service you were selling, you can kind of compensate for that through limitations of liability in contracts. It's really just about arguing about money. The more you go along the spectrum towards using AI for things that aren't about money, that's when you really need to be careful, because it's really hard to gauge what the risk is and what happens if it goes wrong. The medical field is a great example. The other example people use is human resources: if you use AI to start making decisions about employees, about hiring or firing or anything to do with employees, it's hard to quantify what that means, and the risk of getting something wrong is just that much greater. So I think the distinction between what you're using AI for, whether it's really about money or about something that's a bit more human oriented, is a good distinction for people to keep in mind.
SD: Yes. Agreed. So what I’ve heard you say, and I do completely agree, is that deployed the right way with clear intention, AI can be very powerful, but also lots can go wrong. So I would ask you both, what can companies do to proactively manage the legal risk from AI driven systems? So we’ve got traditional risk frameworks in place. But this is a whole new playing field. So how do we need to evolve our thinking?
CG: So, and again this is just from my own experience of what I've seen work well at ATS, I've seen the biggest success when you actually have cross-functional teams come together. That includes somebody from IT, somebody from the business, somebody from compliance, somebody from legal, somebody from operations, because in this highly specialized but at the same time diversified world we live in, nobody has all the answers, in particular when it comes to such an amazing, groundbreaking, powerful tool like AI. So I think you really need to get that cross-functional team together, where everybody brings their specific knowledge and perspective into consideration, because that's where you really start getting the full picture. We've seen in the past with other technologies that if you just leave it in one hand, you can easily go off in the wrong direction. So don't work in a silo; start knocking on people's doors, start interacting with people, start bouncing ideas off people. And really try to go to the people you probably wouldn't have at the top of your list when you think about these things, and, jokingly, that includes ourselves: when people think about launching a new AI tool, not everybody says, oh yes, bring in the lawyers because they're really going to help us here. But at the same time, these legal considerations and requirements are so important to get right from the start, as we've just discussed among ourselves.
GR: I agree, I think that cross-functional aspect is really important. Let me take it one step below that, to the individual level. I think one way every person can help manage legal risk is by doing two things. One, before they use an AI tool, thinking to themselves or asking: what is this AI tool and how does it work? And number two: how am I using this AI tool? The first question, what is this AI tool and how does it work, is really about understanding, when you are putting a prompt into it, whether you have any idea of what it's looking at, how it's been trained, or what information it's accessing to give you an answer. The second question is, once you get that information, what are you actually using it for, and what's the consequence of failure? If you're using the information to make a decision that's very critical, you're going to be much more careful about how you use it and more critical about understanding where it came from, versus if you're using it to generate a presentation for internal purposes for one of your colleagues and you happen to get it wrong, it's not the end of the world, because your colleagues are likely going to understand. But those two questions, where is the information coming from and how am I going to use it, are, I think, a good rule of thumb for anybody using AI in their organization.
SD: I think what I'm hearing from both of you is that there's clearly a level of fluency that legal teams especially need to develop in order to help manage the risk and support the business. We have to know what this means, what we're talking about, and what impacts or consequences it has in order to be appropriate advisors for the business, or even sometimes to lead the governance efforts involved in these initiatives. I would ask both of you, how would you then describe in more detail the evolving role of legal in automation, AI and strategy? How do we move from being gatekeepers who just worry about things going wrong to being strategic advisors for the business as this moves forward very, very quickly?
GR: To me, it's exactly what you said, Sarita. It's the level of fluency, and I think we get that level of fluency by using AI. It's kind of incumbent on all of us to try it, to use it, to understand not only how we are using it or might use it, but how our colleagues are using it or might use it. That fluency in how it's being used will help us think about, okay, well, this could go wrong, or, hey, wait a minute, the organization might actually use it in this way and we could get all kinds of great benefits. So I think it does start to surface the risks and opportunities. But to go to your point, the only way to do that is to get in there, try it out, use it and figure it out.
CG: Very much agree with what Gord said. Just going back two years, when AI started to appear, you saw certain companies have that knee-jerk reaction: let's turn that off, let's prevent access for our employees, like it was a USB port. The thing is, what's happening is that people are still accessing this amazing tool. I was actually on a call the other day with a large consulting company that talks about enterprise-wide risks, and one of the top global risks they identified for this quarter is the usage of what they call shadow AI. And what is shadow AI? It's when people are not permitted to use AI, or AI tools are not provided through the company, but people still use it in their private lives. So what's happening is that they're taking company information, company ideas, company strategy, putting that into shadow AI, an unregulated AI tool, and getting results back. And you also have to bear in mind, and Sam Altman, the CEO of OpenAI, the company behind ChatGPT, was actually in the news the other day saying this, that AI does not have legal privilege, which means that whatever you tell AI can be found. There have been multiple examples where people have put something in or asked AI to code something. There was a large Korean electronics company that actually put sensitive, confidential information into AI, and somebody else found it because it was just so specific. And there are other examples as well. You cannot prevent it from happening; it's going to happen. So the only thing you can do is create the path forward to make the use a sensible use. Saying that people aren't allowed to use AI is not the right way to go. You have to develop those clear governance policies. You have to cover, and Gord spoke about this a couple of minutes ago, data hygiene, making sure that your data source is good, crisp and clean; testing the algorithm; having that human oversight. You can't just shut your eyes and hope that it's not coming, because it is coming, and it's coming fast. One of the sound bites I had in mind from, I think, last year's leadership conference that we had at ATS was the statement that it's never been this quick, meaning AI, but at the same time it's never going to be this slow again. I think this is something we should all be bearing in mind. The speed we're currently traveling at is breathtaking, but it's going to be so much faster tomorrow. That's where we have to prepare ourselves and get this right, because the guidelines and guardrails we don't put in place today, we will be sorely missing in two years' time.
SD: So either get on the train or we get run over.
CG: Yeah, exactly.
SD: I think it always helps to ground the conversation with real life examples. I’d like to ask each of you if you can share a success story where AI was implemented in a responsible, effective way or conversely, a cautionary tale where governance failed or risk wasn’t adequately managed. What can we learn from those cases, and how should that inform our governance models going forward?
CG: One example was from our Orise company, a company which has amazing AI capabilities. The project we were working on was actually for one of the largest European coffee producers in the world. The task given to us by that coffee company was this: they were having variations in the amount of water, the humidity, in the coffee beans. I hadn't known this until then, but obviously you do not want your coffee too dry, because the taste goes away, and you also don't want it too wet, because that can lead to other unwanted quality reductions as well. So the task at hand was, no matter which coffee beans were being ground and what the weather pattern was in Amsterdam on that particular day of that particular week, to always have the perfect humidity in the coffee beans. That's where the customer phoned us and asked us to come in. What we actually did, with sensors and AI regulating based on what the climate system was telling us that day, was produce the perfect constant humidity for our end customer. It's a great use, because it would be absolutely impossible for a human being to take all these various elements into consideration. That's where the computing power of AI, and in particular one of our companies in the Orise group, really came forward and was able to provide that very, very unique solution. From what I understood, I don't think there are too many companies out there in the world that can actually do that, and we were able to do it. So that's a really good, positive story for AI, for everybody who enjoys a nice hot cup of coffee in the morning.
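For readers who want to picture the mechanics, here is a minimal sketch in Python of the general pattern Christopher describes: a closed-loop controller that reads sensors and nudges water dosing so bean moisture stays on target despite changing ambient conditions. All names and numbers are hypothetical; this is not the actual Orise solution, only the sensor-driven feedback idea behind it.

```python
# Minimal illustrative sketch (hypothetical values, not the actual Orise solution):
# a feedback loop that adjusts water dosing so bean moisture stays on target,
# the kind of continuous adjustment a fixed recipe cannot make.
import random

TARGET_MOISTURE = 11.0   # % moisture we want in the beans (illustrative number)
GAIN = 0.4               # proportional gain: how aggressively we correct the error

def read_moisture(rng, ambient_humidity, water_dose):
    """Pretend sensor: bean moisture drifts with ambient humidity and the water dose."""
    return 9.0 + 0.03 * ambient_humidity + water_dose + rng.gauss(0, 0.05)

def run(steps=10, seed=1):
    rng = random.Random(seed)
    water_dose = 1.0
    for step in range(steps):
        ambient = rng.uniform(40, 80)                      # the weather of the day
        moisture = read_moisture(rng, ambient, water_dose)
        error = TARGET_MOISTURE - moisture
        water_dose = max(0.0, water_dose + GAIN * error)   # proportional correction
        print(f"step {step}: ambient {ambient:4.1f}% RH, "
              f"moisture {moisture:5.2f}%, new dose {water_dose:4.2f}")

if __name__ == "__main__":
    run()
```

A production system would replace the simple proportional correction with a learned or model-based controller fed by many more signals, but the governance questions, what data it sees and who verifies its output, are the same.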
SD: It’s a great story.
GR: Let me share another story that's not quite so profound; it's my own personal story of how I've used AI, at a recent annual leadership conference. In my presentation, I ended up using AI to generate a lot of images, and what was fascinating to me was, one, how easy it was: within a few prompts you could generate these great images that were completely in line with the topic we were talking about. The images were beautiful, crisp, colorful, really eye-catching. But then if you went a surface below that and looked at some of these images, they were all kind of slightly off. I remember somebody telling me, yeah, that was a great image of, you know, people sitting around a boardroom table, but if you look carefully, that person only had one leg. These were the kinds of things you wouldn't pick up on if it just flashed on the screen, but if you actually thought about it and dug into it, you'd go, that's really weird. It was a great example of how, in the right circumstance, AI can be really cool: if you're just flashing images very quickly, people aren't going to notice, and you make a great impression. But if you're trying to make a point, if you're trying to win a customer over, for example, how accurate something is really does matter. In my case it was a presentation internal to all of us, and these slight errors really didn't make much of a difference; it didn't really matter. But in a different circumstance it might have. So to me it was a good reminder: something powerful, but you've still got to be careful.
SD: Yeah, I was just going to say the same thing. That's another example of how powerful it can be, but it should be a supportive function, used with clear intention and, as we've discussed, always with human oversight. I don't know if AI will ever be able to entirely replace a human, maybe, but for now that human oversight remains very important. So as we wrap up, I'd like to leave our listeners with some tangible takeaways. Gord, Christopher, what are the top two or three things that every company should be doing right now to strengthen their AI governance and risk mitigation strategies?
GR: Maybe I'll give two, and then Christopher, you can chime in. My first is that I think all organizations should have some sort of rules on AI use, because without any kind of rules, it's guaranteed that something is going to happen that you don't want to happen; you're going to have unintended consequences. The rules don't need to be perfect. You can take baby steps. And your employees, I think, will crave it. Your employees will want to do the right things, so give them some guidance on how to do those right things, a simple set of rules to follow. Secondly, inherent in those rules is a reminder to everybody, and both of you have said this before, that ultimately human beings are still responsible in some way. AI is a tool, and human beings use tools, but human beings are ultimately responsible for the outcome of how those tools get used.
CG: I'm very much in agreement. Taking it down a level from what you just mentioned, Gord, the guidance I would give to people just embarking on the AI journey would be: map out where AI is currently being used, or where it could be used, in your business, because quite frankly you just can't govern what you don't know. That, for me, is probably step number one. Then, as you said perfectly just now, you need to establish the governance framework, and we talked a little bit about that a couple of minutes ago: cross-functional oversight is so important, and accountability. And the third element, I think, is investing in education. If you get those right, then I think you're on a good path.
SD: Gord, Christopher, thank you both for your insights and your candor. To everyone listening, it’s clear that AI isn’t just a tech challenge, it’s a legal, ethical, and governance challenge. And if we get it right, it’s also an enormous opportunity. Thanks for tuning in. In a future episode of Enabling Automation, we’ll dive more deeply into upskilling for an automated future, preparing people, not replacing them. Until then, stay curious, stay responsible, and stay human.
Other Podcast Episodes
Season 3, Episode 1: Making AI Real
Season 3, Episode 2: Establishing Mutually Beneficial Relationships
Season 3, Episode 3: Globalization of an automation company
Season 3, Episode 4: Why should vision systems be included in your automation project?
Season 3, Episode 5: Reducing risk in the automation process
Season 3, Episode 6: Automation roadmaps and smart factory integrations
Season 3, Episode 7: Reducing cost and improving function in new products