
Leading Through Transformation

We’re excited to bring you the fourth season of our podcast series, Enabling Automation. This monthly podcast series brings together industry leaders from across ATS Corporation to discuss the latest industry trends, new innovations and more!

In the ninth episode of season 4, host Sarita Dankner is joined by Steve Emery and Miroslav Kafedzhiev to discuss leading through transformation.

What we discuss:

Is AI transformation about strategic leadership or new technologies?

What do leaders need to do to make change feel more purposeful rather than disruptive?

How do you keep everyone aligned during an AI transformation?

Transcript

SD: Welcome to the Enabling Automation podcast series, Season 4: AI and Automation Shaping the Future Responsibly, the podcast where we bring together experts from across the ATS group of companies to discuss topics relevant to those using automation in their businesses. My name is Sarita Dankner. I’m the Associate General Counsel and Corporate Secretary at ATS Corporation, a global leader in automation solutions. I have over 20 years of legal and compliance experience across public and purpose-driven organizations, leading legal strategy, commercial governance and risk programs with a focus on enabling responsible growth that aligns opportunity, innovation and technology with governance and ethics. Today’s episode is about leading through transformation: an executive perspective on AI, automation and purposeful change. Is AI transformation about a shift in technology, or is it more than that? What does it take to successfully lead an organization through AI transformation? My guests today are Miro Kafedzhiev, President of our Industrial Automation segment, and Steve Emery, Vice President of Global Procurement, two executive leaders at ATS who regularly lead transformation through both business outcomes and enterprise discipline. Miro and Steve, thank you for joining us.

SE: Thank you, Sarita.

MK: Thank you. Glad to be here.

SD: I want to start with the bigger picture. Artificial intelligence (AI) is obviously a very broad term and can mean many things, but generally speaking, it has a way of looking like a tool upgrade, a new capability, a new platform, a new system rollout. But what I’ve seen, and what I believe many leaders are living right now, is that AI transformation is actually a change in how decisions are made, how work gets done, and how trust is built. So, Miro, let me start with you. I would suggest that achieving an AI transformation that is successful and that sticks is more about strategic leadership than just implementing a new technology. What do you think?

MK: I would normally agree with you, but we probably have to take a step back and put things in perspective. Yes, indeed, this is a new technology, a new capability, a new way of doing business and a new way of living our lives. But at the same time, it’s also about how you, or I, as the leader take this opportunity and implement it. So it’s all about how you deploy it as a strategic enabler, rather than just a new technology that comes in. That’s why the most important point is not getting caught up in the hype of artificial intelligence, but actually trying to understand the use cases where it would make sense for your particular business and your particular operation.

SD: Makes sense. And in your opinion, what do you think the objective or, you know, the focus needs to be to keep everyone in tune with the right goals and outcomes? Is it the customer experience? Is it productivity, quality, safety, or something else?

MK: There are two ways companies are looking at this. One way is asking what quick wins, or low-hanging fruit, we can get by deploying an agent here and an agent there to speed up our documentation, our calls with customers, our internal queues, etc. That’s good and great, but those are only incremental, small gains. The second approach is that you need to get to a vision: what is your vision for how this technology is going to come in and enable you?

SD: Very good. Steve, same question to you. Do you agree that it’s more about leadership than just the technology?

SE: I do agree, yeah. AI tools can be very powerful, but we’re talking about AI transformation. So there’s AI, and then there’s transformation. Technology doesn’t drive transformation. People do. Processes do. Culture drives transformation. As leaders, we are continually setting the tone and shaping the mindset and behaviors of our teams. So to deliver lasting value using AI, or any other technology, or indeed any other change, we need to consider first what it means for our business: what it means for business performance, what it means for our customers, shareholders and employees. We start by defining the business value that AI can bring. Then we think about our people: what does it mean to them? How is their success being measured, and how will we change how their success is measured in the future with AI? Employees could feel threatened or empowered by AI, and the way we set the tone and position it as leaders is important. So for me, it’s very much a strategic leadership issue.

SD: What do you think a leader needs to do to make that change feel purposeful rather than disruptive for the people who are making the change?

SE: Yeah, for the most part, people like to understand the why: why it matters for them and why it matters for the business. Miro talked about setting the vision, and I think that’s really important. But as leaders, we’ve got to be consistent, and usually repeated, in our communication around the change. How does AI improve people’s work? How does it strengthen the organization’s ability to meet its mission, and how does it benefit our customers, shareholders and employees? We need to paint the picture that unlocks what the future will look like and what value AI brings to that future, so that people can understand it and then get behind it.

SD: And I think that’s very true. Now let’s dig deeper. In my experience, every real transformation forces leaders to confront at least two things. One is uncertainty, because you can’t predict everything that could happen. And two is identity; Steve, you spoke to this a little bit. People wonder: what does this mean for me, for my role, for how we win and how we move forward? So, Miro, what did you have to learn, or even unlearn, as a leader when AI first entered the picture? And as AI continues to enter the picture, how is your leadership style evolving to help drive this transformation?

MK: For me, AI and large language models (LLMs) came much earlier than the wave you could call 2021 onward. The question now is how you harness it to the next level. To your point about what I had to learn or unlearn, it was more about the tools than the methodology, because the current level of large language models is obviously light years away from where it was ten years ago, and it offers many more opportunities in terms of faster deployment. What I needed to unlearn is that, at certain points, you just have to seize the opportunity and take a leap of faith. The technology has evolved to a point where it’s actually no longer a leap of faith; it’s most probably a controlled descent. So it is more about trusting and using the technology than being cautious about what it can do. Is it a bubble, is it not a bubble? Seize the opportunity. That was the thing I had to change.

SD: Were there any hard truths that you had to say out loud to yourself or to your team to get you to take that leap of faith?

MK: Yes. The hard truth that I had to say to our teams is that if we continue doing things the way we currently are, we will continue to be just one of the crowd, and the only way we can differentiate is to do things differently, because otherwise it’s the definition of madness: doing the same thing and expecting a different outcome, right? So we had to try different things, even if the reaction was, well, maybe we tried that in the past, or, well, this is not going to work, the technology is not there. Well, we will just have to try it, and we will win along the way.

SD: Great. And Steve, what about you? Is your leadership style evolving to enable AI transformations?

SE: I think it is, but like Miro, I think it started evolving years before the AI transformation started. In supply chain, data analytics has been a key, essential capability that we have had to develop within our group and the function, years before AI was available to us. The capability development in this area was a journey, and it shaped the way I think about using technology to get the greatest value out of the function that I’m leading, and in this way, I think there’s a parallel. I thought about the organization and talent development needed to enable us to use data analytics in the most effective way for supply chain years before AI, and I think this is a continuation of it. Since then, the data analytics function has evolved in several different directions, using digital technologies to help us be more effective and more efficient in supply chain operations. So the way we’ve evolved the team over time has changed, and the emphasis we’ve placed on acquiring and developing talent with skills in the use of the technology is now kind of normalized for us. So when I think about what AI can do for us in the future in supply chain, it’s an extension of a journey that started with data analytics and then digital supply chain capability. Extending this new capability into AI in a way that’s meaningful for our business performance is part of how we think about it, and that aspect hasn’t changed, I don’t think.

SD: Okay, so what I’m hearing from both of you is that this is a continuation of a journey that started a while back. Steve, any wrong turns along the way that have maybe taught you something, not necessarily about AI, but about leading through uncertainty?

SE: Well, yeah. In hindsight, I think my initial thinking around how to use this technology was very narrow. It was very much applied to a set of supply chain opportunities that we saw at the time, and I think maybe I was slow to realize how powerful that capability could be in other areas of the business and in other functions. Certainly now, with the capability that AI toolsets can bring us, it has sharpened all of our minds to what it can mean for the business overall. So for me, it’s now about thinking a lot more broadly about what the technology can bring to the business and how it can improve results in more areas. That necessitates more capability in the use of the technology, but within different functions across the business in a coordinated way, so that we can drive forward together using the technology.

SD: Very good. I believe that one of the leadership traps in transformations is believing that speed equals progress, when sometimes speed just means you’re moving quickly in the wrong direction. Which brings us to a tension that I think every executive feels from time to time, and which especially applies during an AI transformation: moving quickly and purposefully without breaking trust. So, Miro, I want to ask you about pace. It often feels like we’re moving forward at lightning speed. How do you manage the tension between moving through an AI transformation quickly and still considering good governance and responsibility?

MK: Well, this transformation, like any other transformation, is about people, and people will always be the slowest-moving part. They therefore set the pace at which the overall transformation will go through, so the pace has to be bounded. And this is where you, as the leader, need to keep a cool head: it’s not just about the shiny toy that appears in front of you and is going to solve all your problems. Behind it, you’re going to have several hundred, several thousand or several tens of thousands of people that you need to bring to the same place at the same time, which takes years, not just days or months. Again, the speed of movement is the speed of the human being.

SD: Understood. And what about you, Steve? How do you manage that tension between wanting to move quickly through a transformation, an AI transformation, but still considering what it means to have good governance and responsibility?

SE: Yeah, I think when it comes to pace, you mentioned the pressure that’s on leaders, and I think that pressure is there. It’s important as leaders that we understand the competitive landscape, because in the end, all businesses are in competition, and all businesses need to try to outpace their competition. With respect to the deployment of a digital capability like AI, which can be transformational in business performance, it’s really important that we keep pace with the industry and figure out how to use it as a competitive advantage for our business. So there’s a lot of pressure that’s going to drive the pace of our adoption. In terms of how we do that, practically, we need to pilot new capabilities fast and then scale them up in the business. For us in ATS, we’ve got the ATS Business Model, and we can use its tools to do that: we can define the opportunity as a type three problem and then use problem solving and kaizen to accelerate that capability, so that we can pilot fast and then scale up. When it comes to speed, it’s important in that process that we test the real process with real users, and in that pilot we can identify risks and make sure we’ve got effective controls in place. Now, the flip side is making sure that we’re moving at pace, but moving responsibly at pace. We need to have guardrails set in how we deploy this and how we set those projects up in the beginning. And the guardrails need to be built, in my view, through cross-functional governance, so that we don’t have any siloed, functional thinking in the way we set up those pilot projects, but are taking into account the complete business: the perspectives from the data, from operations, from procurement, from legal, from HR, the ethical considerations. All of these things need to feed into how we set the guardrails that guide the pilot process.

SD: Okay. Excellent. So, as an extension of good governance and responsibility, I want to talk about trust. Beyond the systems implementation, I believe that AI’s potential can only be fully realized if AI is both trustworthy and trusted. I read a global study on AI by KPMG and the University of Melbourne that found that 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits. But trust remains a real challenge: only 46% of people globally are willing to trust AI systems. That means more than half of people globally are unwilling to trust AI, which reflects a clear tension between its perceived benefits and risks. So Steve, I’d like to ask you, when trust in AI is apparently a little fragile, what does leadership need to do differently to be effective through an AI transformation?

SE: I think there are two levels of trust: one is in what AI is telling us, and one is in trusting how the transformation will benefit the business. In the second case, we need to show early, tangible wins that benefit people as well as the business, and those can come in several different forms. In supply chain, as an example, we have many roles which can add a lot of value to our business through creative thinking, analytics and planning, but there’s also a heavy workload in administration, including data mining and data entry into our systems, which digital tools and AI can help us accelerate. So when our people can see that AI is being used to reduce their workload through less administrative burden, giving them faster answers and allowing them to make faster decisions with fewer errors, they understand it will benefit them. Then people have more time to think creatively and strategically, and to use their brainpower to add value to the business. So we’ve got to think about this in the context of how we can shift people’s roles to add more value to the business by reducing the administrative burden on them through the use of digital tools and AI. As leaders, we need to show specifically how those roles can evolve: again, set the vision and the direction, and then demonstrate it. At some point you have to try it, but demonstrate it, showing people that where they’re spending less time on administrative tasks, they are beginning to add more value to the business by spending more time on strategic tasks. And then, when the metrics, measurements and KPIs you’re using for those people demonstrate the impact they’re having through their new way of working with AI, I think you’re on the winning path, and people begin to build up trust in the process.

SD: Yeah, I think that’s a little bit like the leap of faith that Miro mentioned earlier in the podcast. Miro, how do you do it? How do you build trust in your business environment when everything is evolving so quickly and we can’t fully predict outcomes?

MK: Probably the most important thing is that you don’t take the next step before you get full use of the previous one, or before you have fully decided that it’s no longer valid or usable. What I mean is: don’t go for the next toy before you have gotten the full benefit of the current one. Yes, new tools are going to come along, but again, they’re going to be used by people, and people will not be able to switch to a new thing every single day. So the most important thing here is sustainability. When you are implementing a particular tool, whether it is an agent, a model or an AI-powered module that you’re putting in, you first have to get the use out of it, which means you have to be able to sustain and repeat. Only after that do you go to the next shiny toy. At one point, I used to sell a lot of Android-based devices. This was the time when 3G came in, then 4G, then 5G; actually, 5G networking came in China much faster than anywhere else in the world. At that time, the question was: should I be using the latest, leading-edge Android system, or should I be using an Android system that might be a year older, but is stable? I had debugged it, I knew everything about it, and I was able to deploy it without any hiccups. No, it was not leading edge. No, it did not have all the shiny bells and whistles. But the outcome and the benefit for the customer was: I don’t need the latest Android system, I need a system that actually works and is not going to break down tomorrow. So sustainability is the name of the game. Even in a fast-moving environment where everything is pushing forward, you need to make sure that what you have deployed actually works before you go to the next one, or that it doesn’t, and then you make the change.

SD: I also read a report based on Edelman Trust Barometer data showing that trust in AI varies by country. For example, it’s quite high in India and China, at 77% in India and 72% in China, but much lower in the U.S. at 32%, Canada at 30% and Germany at 29%. That tells me the pace of AI adoption won’t only be determined by how fast the technology advances, but also by the willingness of people to use it, which may differ by country. So Miro, when you’re leading a global organization with different customer expectations, different employee concerns, and different regulatory and cultural contexts, what does earning trust look like at the executive level when you’ve got all those different dynamics around you?

MK: So this is an important question, and we have to actually peel back one layer to see why you have the cultural differences. It has to do with what you could call the societal contract, or the public contract, between people and government. The further east you go, in countries like India and China, the more central government engagement you have, whereas in Europe, and in particular in Germany, you have much bigger conservatism about privacy in all possible shapes and forms. And North America is, I’m not going to say in the middle, but it is much more conservative than the Far East and much less conservative than Europe. These are concepts, coupled with legislation, that you have to take into account. That’s why, for example, this is no different than export controls compliance: you just have to go through it, you just have to comply with it, and then you have to understand the advantage or disadvantage for you of using a particular type of technology. So I perceive this as no different than data privacy policies or export compliance policies; it’s the way business is done. You have to incorporate it into your structure, which is why I’m always a big supporter of local for local: you have to take into account the local requirements. When you do local for local, even with AI, even though AI actually enables a much bigger globalization play, it also allows you to customize and, for example, build a setup that says: okay, I’m in the U.S., use the local setup for data privacy and data governance when you go in and build an agentic model.

SD: Steve, what do you think? How do you convey confidence but also transparency in such a dynamic environment, especially when we don’t have all the answers yet?

SE: Within the organization, again, it’s about setting the common vision and common goal that we have as a business. That’s the starting point, and we have common goals globally across the business, no matter where we are. After that, we’re all learning together. As leaders, we can be confident in where we’re going, confident in communicating those goals across the business, and confident in our teams’ ability to get there. That’s a good starting point. Then, in terms of how we deploy that in different regions around the world, we know that different regions will respond differently to change, whether it’s trust in AI or another type of change. But what I’ve found works best is that when we go and work specifically in those regions and with the people, we can use the same playbook with the same goal and the same end vision in mind. How they get there will be different because of their different ways of thinking, their different approaches to the challenge in front of them and their different trust in the outcomes; the different cultures are important in that. Nevertheless, if we go there and work with them, they’ll move in their own way and at their own pace. So I wouldn’t expect the teams in our different regions of operation all to get there at the same time, but I know how they’ll move and how they’ll get there. If one region is moving at a slightly different pace than another and figuring it out in a slightly different way, that’s fine, as long as the vision and the end goal are the same. That’s the way I would set it up.

SD: I’d like to move on to the concept of alignment, because transformation can’t be led by just one function or one leader in isolation. So Steve, over to you again. In any organization, goals, incentives, priorities and risk appetites differ; we’ve already talked a little bit about that as it relates to trust. How do you keep everyone aligned during an AI transformation within those different contexts? I know we talked a bit about trust and that everyone’s going to move at their own pace, but how do you keep that North Star the same: the goal, the objective and the final vision?

SE: I think at ATS we make it quite easy for people, because we align through our shared value drivers that drive business performance and our ATS core values of people, process and performance. This is who we are as a business, and it governs everything we do using the ATS Business Model, including how we use AI or any other technology or tool that’s available to us. Having an enterprise-wide shared system and principles in that way is the foundational platform that sets out how we operate. And because that’s consistent and common in everything we do, it makes it easy for us to use that thinking and apply that logic when it comes to things like AI deployment. It keeps us all aligned regardless of local norms and things like the different risk appetites of individuals, groups or teams. These are the things that guide us.

SD: I absolutely agree we have very strong, common and clear values at ATS that keep us aligned. Sometimes, though, there are competing interests or competing priorities; certainly every day we have to balance those. What would be your thinking around how to lead through that, Miro?

MK: The importance is in the principle. As long as you’re able to stick to your strategic plan, drive back to it, and deploy it with a clear enough understanding that the person at the lowest level, in the field or on the production line, can understand it, that’s the objective. Which means, when I go and ask, what do you think our objective is going forward, the answer would be: we want to diversify and do more than just transportation, for example in Industrial Automation, or we want to be leading in global nuclear automation. So if the frontline person has been able to understand that very simple concept, then the process works. If the answer is, I have no idea, we never talk about that, then you definitely have a breakage. That’s why the important part of alignment, which in a lot of places and a lot of the time is called over-communication, is actually the ability to streamline and to first get alignment on the strategic plan with your team, not to do it in isolation in a dark room with your strategy leader. Get the alignment. Get the team engaged when you build it. Why? Because after that, they know they will need to go and execute it. That’s where the alignment is going to come from.

SD: This is where I think the executive role becomes very specific, especially when we don’t have all the answers but need to create the conditions for an organization to move forward together. And as you mentioned before, shared principles, I think, are key, along with clear accountability. Which leads me to the last big leadership challenge we’ll discuss today: making it stick. A lot of transformations start strong, but then they lose altitude. McKinsey reported that while 56% of respondents said their organizations achieved most or all of their transformation goals, only 12% said they sustained those gains for more than three years. So, Steve, what does it take to make transformation durable? What kinds of leadership habits prevent an organization from treating AI, in this context, as a one-time program instead of a sustained capability?

SE: Yeah, AI is a capability, not a project. It’s not a one-time thing. As we embed that capability into our workflows and processes, we expect to see certain results from it and a positive impact on our KPIs. That’s where we would start to see whether we’re having the impact we expect, and if we’re not, we react to that. But we also need to take time to measure the utilization of the new tools being deployed. This is something we’ve done in supply chain, and it’s been very effective: you see and measure the adoption rate of the new tools, which enables us to see where in the business we’ve got high adoption and where we’ve got low adoption, and then we can respond to that and drive adoption, and also celebrate adoption milestones with the team as we see the use of the tools grow. With this, we start to see the results we expect to achieve through the utilization of AI. Of course, as that begins to happen, future goals will be built on the improved performance that we’ve now unlocked; that’s normal in business. As that becomes the norm, the KPIs reflect it in ongoing targets year over year. So while adoption speed may vary, our approach to continuous improvement, and the mindset that brings, actually supports consistent adoption across the business, because it resets the norms in terms of performance level. I think that using that cycle of performance measurement, feedback, reaction and continuous improvement countermeasures will help drive this whole process forward.

SD: And really what I’m hearing is embedding it into the ordinary course and then building on top of that.

SE: Exactly. Yeah.

SD: And Miro, from your perspective, what does making it stick look like? This is a very rapidly evolving space. How do we make sure we’re not creating change fatigue for our employees? And how do you keep that energy and clarity high over time?

MK: There is very good analysis and work done in a book called Managing at the Speed of Change, which basically says that the only certain thing you’re going to have is change. Even though we don’t like change, I don’t like change, and I very much doubt you like change either, change is inevitable, and it will continue to come faster. So the question is how you get your organization to change continuously, and to change at the speed at which the change is taking place. As for making it stick, that is the most important part: it’s not only about how you’re going to use it, but also about how you are not going to use it. What are the cases where you will not go in, where you’re not going to do that? I think the combination of the two is what makes something like this stick, because otherwise you can say you have to use agents for everything: your Outlook, your meetings, your summaries, your PowerPoints, your Word documents, your email drafting. No. In something like this, you as the leader have to set clear boundaries around where you will not use it, because it doesn’t make sense.

SD: Very good. All right, thank you both for all of that. I’d like to wrap up with a quick lightning round. I’m going to ask three questions, and for each one I’ll ask either of you to just jump in with a really brief response. So, number one: what’s the one essential leadership behavior you’d say a leader needs to adopt during an AI transformation?

SE: I would say encourage curiosity: curious discussion, curious questions and answers with the team, because it creates trust, accelerates everybody’s learning and invites participation from the team.

MK: Patience. AI is driving things fast. So how do you manage it? Patience.

SD: Excellent. Okay. Number two, what’s one leadership non-negotiable that you need to put in place to protect trust during change?

MK: The ability to drive a repetitive check of sustainability: is this thing still making sense? So just keep checking in. No regrets. You move on. No regrets.

SD: What about you, Steve?

SE: I think mine would be goal clarity: making sure we’re clear on what we’re measuring and why in the way we’re deploying this, so that we avoid shiny toy syndrome. We’re doing this for the purpose of a clear business goal, rather than because there’s a shiny toy we think we want to use.

SD: I like that, the shiny toy syndrome. I think you should coin that phrase. Okay. Number three: if you could give one sentence of advice to a future leader about leading change in the age of AI responsibly, what would it be?

SE: Maybe it’s a bit of a repeat, but have a clear vision of the end goal, be honest with people about what you know and what you don’t know, and include the team in the journey to get to the vision together.

MK: When you’re moving forward, do not compromise on your core values. Maybe you need to change your core values, but whatever they are, do not compromise on them.

SD: Miro, Steve, thank you both for sharing your experience and your valuable insights with us today. To our listeners, thanks for joining us for this episode of AI and Automation Shaping the Future Responsibly. It’s clear that strong and intentional leadership through AI transformation is key to successful and sustained results. Stay tuned for future ATS podcasts. And when it comes to AI and automation, always remember to stay curious, stay responsible, and stay human. Thanks for listening.
