Enabling Automation Podcast: S4 E3

We’re excited to bring you the fourth season of our podcast series, Enabling Automation. This monthly podcast series brings together industry leaders from across ATS Corporation to discuss the latest industry trends, new innovations and more!

In the third episode of season 4, we welcome host Sarita Dankner, who is joined by Sheila FitzPatrick to discuss what leaders need to know about the foundations of responsible and ethical automation.

What we discuss:

  • What is responsible AI and why is it important?
  • What is driving the urgency for global standardization?
  • Is compliance enough when it comes to AI?

Host: Sarita Dankner, ATS Corporation 

Sarita is the Associate General Counsel and Corporate Secretary at ATS Corporation and has over 20 years of legal and compliance experience across public and purpose-driven organizations, leading legal strategy, commercial governance and risk programs. She focuses on enabling responsible growth that aligns innovation and technology with governance and ethics.

Guest: Sheila FitzPatrick

Sheila is the Global Ambassador of the Global Council for Responsible AI and a globally recognized expert in data privacy and protection.

——Full Transcript of Enabling Automation: S4, E3——

SD: Welcome to the Enabling Automation podcast, where we bring experts from across the ATS group of companies to discuss topics relevant to those using automation in their businesses. My name is Sarita Dankner. I’m the Associate General Counsel and Corporate Secretary at ATS Corporation, a global leader in automation solutions. I have over 20 years of legal and compliance experience across public and purpose-driven organizations, leading legal strategy, commercial governance and risk programs, with a focus on enabling responsible growth that aligns innovation and technology with governance and ethics. Welcome to season four with a powerful conversation about AI and automation. Today we’re going to talk about the foundations of responsible and ethical automation. My guest is Sheila FitzPatrick, Global Ambassador of the Global Council for Responsible AI and a globally recognized expert in data privacy and protection. Sheila, thank you for joining us.

SF: Sarita, thanks for inviting me. I’m excited to join this. I think we can all agree that AI can do amazing things. It can enhance productivity, write music, design art, write and debug code, even help diagnose diseases. It’s probably the most exciting technology that has surfaced since the internet was created.

SD: So, Sheila, what is responsible AI? Why is that important?

SF: That’s a really great question, Sarita, because, you know, as you said, AI has really kicked off and people have really jumped on board the AI bandwagon, and there are some phenomenal things being done. We really need to look at the responsible side of it, which is a set of principles that prioritize human well-being and ethical considerations when organizations are designing, deploying or using AI, especially the principles of fairness, transparency, accountability, privacy. As you know, privacy is my number one concern in all areas. And security. So it’s making sure that we’re taking advantage of the benefits of AI, but not putting organizations at risk, looking at the ethical use of it, and making sure that we’re finding that right balance.

SD: Yeah, exactly. I agree with all of that. It is so important. Maybe let’s step back a little bit and you can tell us a little bit about the Global Council for Responsible AI. What is it? What is its mission and how is it helping us to shape standards globally?

SF: Yeah, I’m really excited. I was invited to join almost a year ago, and then I took the oath of office as a Global Ambassador in September at the Houses of Parliament in London. And that was quite exciting. What the Global Council for Responsible AI is doing is really looking at how organizations can develop and deploy AI while ensuring that we’re looking at the alignment of values around certifications, human dignity, trust, the environment, safety, long-term accountability. We’re working with regulators, we’re working with public authorities, and we’re working with the technology companies that are developing AI. Our mission is really to ensure that as AI evolves, so does trust in the environment, and that the use of data is done in a way that still upholds the values of human dignity, privacy and security. It’s a very exciting time. We’re also looking at education. We’re building standards, we’re building guardrails, because none really exist today. And it is a global alliance, so we have a group from all over the world; every continent, and almost every country in the world, is represented. We have a great group of people, some focused on the technology side, some focused on the legal side. Obviously, I’m focused on the privacy and legal side. And then, certainly, people who are involved in the training and deployment of AI.

SD: That sounds awesome. And it sounds like it’s such an important mission and cause. What do you think is driving the urgency for this kind of global standardization right now?

SF: Well, I think the real reason is that, you know, AI, although it’s been around for a while, has all of a sudden taken off like wildfire. And what’s happened is a lot of organizations are just jumping in, you know, feet first, and there hasn’t been a lot of due diligence around, you know, what are we really trying to do from a governance, a privacy, a security point of view. They’re getting excited about the cool factor of all the benefits of AI, but they’re really not understanding some of the negatives. And there are negatives out there, and we have to acknowledge that: there are deepfakes, there are AI scams, there’s unethical use of personal data and even proprietary and confidential data. So as organizations become more and more involved in the deployment of AI, we need to help them take a step back and say, okay, what are you trying to achieve through AI? And how do you ensure that you’re building trust not only among your customers and your partners, but your employee base as well? That’s really critical: making sure that as you adopt or consume more and more AI, you’re looking at what you’re trying to achieve. How is it going to impact your organization? How is it going to impact individuals as a whole? How does it impact human dignity and the right to privacy and the right to own your own personal data? So it’s more of an education, so that organizations can take a step back and say, we want to take advantage of AI, but we need guardrails, we need standards, and we need some kind of guidance so that we make sure we’re developing it and deploying it in a legal and ethical manner.

SD: What would you say to those people who, I mean, I don’t think anybody would disagree that, you know, legality and ethics are important, but what about those people who say something like: if we attach too many rules to it, we’re going to stifle creativity or lose competitive edge or discourage innovation. What would you say to people like that?

SF: Well, you know, I hear that all the time, as you know, because I think people look at privacy the same way. Every time someone brings up privacy, it’s like, oh, no, now we have to go back and do this assessment, and, you know, we’re going to stifle innovation. I like to tell them that the whole point of this is not to stifle what you’re trying to do; it’s not only to protect individuals, but to protect the organization, because there are some pretty severe sanctions in the new AI regulations that are out there in Europe and Asia Pacific. Australia has new laws that have gone into effect, and even the U.S., which is very slow to adopt any kind of regulation that might stifle innovation, is introducing new regulations around AI. So organizations need to understand that you still have to do your due diligence. You still have to understand, as I talked about earlier, what the impact is going to be on the organization and on individuals as you’re trying to achieve your AI goal. It’s not about trying to stifle you, but getting you to think: what data do you need? What does your model look like? What organizations are you partnering with, to either buy their technology, implement their technology, or partner with them to develop technology? What are the questions you need to ask? What’s the assessment you need to do? And just make sure you get it right, because unfortunately, with AI, once the data is out there, you can’t get it back, and there are no guardrails around what is being done with that data or what the lawful basis for using that data is. You need to make sure you’re doing your assessments upfront so you’re not putting yourself at risk, your organization at risk, and your employees and your customers at risk. So it’s really, you know, more a matter of not stifling you; we just want to keep you out of trouble, where you don’t want a sanction of 5% of your annual turnover.

SD: No, you don’t. Now, obviously compliance is often a focal point or an entry point for many companies. But I believe that ethical leadership is what really builds trust long term. So what are your thoughts on that? Does legal compliance always equal ethical practice? Or to put it another way, is compliance enough when it comes to AI, or is there more to it?

SF: No, there’s a lot more to it. And that’s actually a great question. You know, if you think about AI, it’s really a paradigm shift; we’re looking at the world in a different way. AI is a tool that is changing our lives, sometimes for the better, and sometimes we have to take a step back. But it involves all aspects of an organization, not just compliance. Compliance is there to help you with the rules and regulations and guidelines about what you can and cannot do. But, you know, HR is involved when it involves employee data and how they want to use AI. Certainly technology is involved, because they’re the ones going in full force with the new technology and trying to roll out applications built on AI. Leadership: it’s absolutely critical that leadership is involved, because leadership is the face of the organization, and you don’t want to lose the trust of your customers by deploying a technology that can cause harm to the organization or to an individual. So it’s really a paradigm shift that makes everyone involved in the lifecycle of AI a steward of that technology and a steward of the data, and I’m not just talking about personal data; it could be confidential data of the organization, proprietary information, your customer data. It’s all the data you’re collecting, from the time you collect it to the time you destroy it. And every aspect of the organization has to be involved.

SD: Understood. Understood. And, you know, from your perspective, are there frameworks or guiding principles that you would recommend to help companies build a more conscious and ethical AI strategy? Where should they begin?

SF: Well, you know, that’s really interesting, because that’s part of the reason why the Global Council for Responsible AI was started: to build a lot of those standards and guardrails, because none really exist today. I mean, as I mentioned earlier, the EU AI Act is in place, and it does provide direction and guidance to a certain degree. But there’s a lot that’s missing today; we don’t have a lot to rely on. So what’s happening is that the organizations that are building the AI technology, very large, very prominent organizations, are sort of defining the standards, but they’re defining the standards in a way that benefits what they’re trying to achieve, as opposed to a more holistic view of the whole lifecycle of AI, incorporating every aspect: what you’re doing with the data, what you’re collecting, what the ethical use is, how it impacts the environment, how it impacts human dignity. All those things are not quite in place yet. That’s a big part of what the Global Council for Responsible AI is doing: building those standards and education certifications.

SD: So important. And you mentioned before about the role that leadership has in espousing these principles for responsible and ethical use. And often we see, you know, it’s sort of the tech leaders that are at the forefront of all of this. But what about the non-tech leaders? Do they have a role to play? You know, people, for example, in HR operations or engineering, legal, you know, where do they fit into all this?

SF: Yeah, they absolutely fit into it. Obviously with technology, they’re the ones either developing or trying to deploy the AI technology. But you asked about HR. Well, HR certainly wants to use, and is using, AI in many organizations, where they can create templates for, you know, employment contracts or certain types of agreements. So they have a role to play as well, to make sure that what they’re using AI for, and this is where I’m going to put on my privacy hat, doesn’t mean using AI to process personal data. They need to be saying, all right, we want to use AI to help guide the organization to be more effective and more efficient, to help employees perform tasks or, you know, complete forms or whatever they have to do in a more efficient and effective manner. But they also have to make sure that they’re not including personal data in that area. So they have a responsibility to say, okay, it makes sense to use AI for these things within HR, but not for these other areas. So it’s knowing where, and I keep going back to guardrails, knowing where the guardrails are. Legal obviously has a role to play, because you have to look at what the regulations around the world are on the use of AI, and there are more and more regulations coming out around the use of AI, delineating between, you know, harmful AI, risk factors, risk mitigation, etc. Procurement would certainly have a role to play, because they’re going to be looking at the vendors that IT wants to bring on board; they need to know how to do an assessment of, you know, an AI vendor. Your privacy program is going to have a role to play, because they’re going to have to do a privacy impact assessment on either the development or the deployment of AI. We talked about leadership earlier. Leadership absolutely has to be involved, because leadership is going to be driving innovation within the organization, and AI is the number one innovation tool today. So they want to make sure that they’re taking advantage of what’s out there, but also working with Legal, HR, IT, Engineering, all the different areas, to make sure that they’ve ticked every box, to make sure, again, that the ethical use, the transparent use, the fairness around AI is being addressed.

SD: Ah, you bring up some really good points, so thank you for that, because I think it is a very holistic responsibility that we have to AI. Everybody has a role to play. I agree with that. And I think it’s probably even broader than we know. But we’ll learn. We’re learning. It’s a very steep learning curve, I think, for everyone. I want to talk a little bit about the ethical dilemmas that come up in practice, in everyone’s use of AI. So, for example, you mentioned HR’s, human resources’, use of AI. Bias is often a recurring concern or theme, for example in assessing the information that is available to them when they’re trying to evaluate candidates or roles, but it can come up in other ways. So where does bias typically come from, and what can companies do to mitigate it?

SF: Yeah, that’s a great question, because bias absolutely is one of the areas that organizations are very concerned about, and it’s an area that regulators are concerned about as well. You have to remember that when you’re using AI, well, just look at the definition of artificial: a man-made thing trying to replicate something natural; it’s an unnatural thing. And so what’s happened is the models in the AI technology being used are being provided by the developers. So you have to understand where the information is coming from that builds these models, that provides the information that actually builds the algorithms, so that you can have output through the AI. And everything is sort of a point in time, because a lot of the information is coming from what’s on the internet, what’s already online, information that is coming from the developer’s customers or from their own internal beliefs and philosophies. So you already have sort of an inherent bias as they’re building their tool. So with the outcomes, you have to make sure that there is always human intervention; there has to be someone who understands the process and understands the questions being asked, so that they can tell whether or not the output is truly a valid output versus, you know, a biased output that is not reflective of the environment. People are creating AI, and those biases are then being integrated and aggravated by AI itself as it compounds its knowledge; using that same bias, it just deepens and deepens it. That’s why many of the laws, the data protection laws in particular, say that one of the requirements is that there can’t be automated decision-making; there has to be human intervention when you’re making decisions. It kind of falls into what we were talking about with HR: when you’re making any decision about an employee’s behavior or anything about an employee, whether it’s a promotion or, you know, giving them a review or anything, there has to be human intervention. You cannot rely on AI to give you the output. That is a direct violation of data protection law. So that’s where you start to see the intersection between, you know, the use of AI and the different regulations that are out there that you have to look at.
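For readers who want to see what that human-in-the-loop requirement can look like in practice, here is a minimal sketch. The data classes, field names and review flow are hypothetical illustrations only, not part of any regulation or ATS system; the point is simply that a decision about a person is never released straight from the model.

```python
# Hypothetical sketch: AI-assisted decisions about individuals (e.g. a promotion
# recommendation) stay "pending" until a named human reviewer signs off.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    subject_id: str           # the person the decision is about
    decision: str             # e.g. "promote", "reject", "approve_claim"
    confidence: float         # model's self-reported confidence, 0..1
    uses_personal_data: bool  # whether the model saw personal data

@dataclass
class ReviewedDecision:
    recommendation: AIRecommendation
    reviewer: Optional[str] = None
    approved: Optional[bool] = None
    notes: str = ""

def require_human_review(rec: AIRecommendation) -> ReviewedDecision:
    """Wrap the AI output so it cannot be acted on until a human reviews it."""
    return ReviewedDecision(recommendation=rec)

def finalize(review: ReviewedDecision, reviewer: str, approved: bool, notes: str = "") -> ReviewedDecision:
    # The human reviewer, not the model, is accountable for the outcome.
    review.reviewer = reviewer
    review.approved = approved
    review.notes = notes
    return review

if __name__ == "__main__":
    rec = AIRecommendation("emp-1042", "promote", 0.87, uses_personal_data=True)
    pending = require_human_review(rec)
    # A subject-matter expert checks the inputs and the output before anything is acted on.
    done = finalize(pending, reviewer="hr.sme@example.com", approved=False,
                    notes="Model trained on non-comparable cohorts; re-run with local data.")
    print(done.approved, done.notes)
```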

SD: Yes, exactly. There were, as I understand it, numerous and years-long ethics studies done to inform the EU AI Act, for example, and the approach it would take, which resulted in these standards around human oversight and keeping a human in the loop, in essence, you know, for all use of AI, to make sure that we don’t lose control of all the good work that we’ve built as humans.

SF: Exactly.

SD: And I want to talk a little bit more about human oversight. I just think it’s such a critical component of the use of AI. As beneficial as AI is, and as fun and cool as AI is, it can present some concerns, especially if people don’t stay involved and in the loop. So can you talk more about that? Just how important it is to keep a human in the loop. And maybe some examples of where you feel we should be especially careful, or maybe even not automate at all. Where should humans always stay in the loop?

SF: That’s a great question, and I think you all know my bias, so I’m going to be biased, because it’s around privacy. I think any time you’re talking about someone’s personal data or personal sensitive information, AI should not be involved. You know, if it is personal data that doesn’t put you at high risk, that’s different, but there still has to be human oversight to make sure that you understand what the data is being used for, how transparent you are about what data you’re using, making sure that the individual is informed about what’s being collected, how it’s being collected, what you’re doing with it, what models it’s being fed into, who’s going to have access to it, how long you’re going to maintain it, etc. That’s critical. But when it comes to sensitive data, which is very high-risk data, I don’t believe it should ever be put into any type of AI tool; the potential for harm is too great. And there always has to be human interaction looking at the output. We talked earlier about how you know when the output is right or wrong, and that’s when you need human intervention from an expert, an SME, a subject matter expert on whatever it is you’re using that AI for, so that you have someone there who actually understands what you’re doing and what you’re looking for as an outcome, and can identify right away that that’s not right, that’s not accurate at all, that just doesn’t feel right, I know the law, that’s just not the way it operates. That’s critically important. I can give you a great example, and this is actually a personal situation that happened to me about a year ago. I decided, you know, I should look into long-term disability, just to see. And, you know, you have to fill out forms for the insurance company. You fill out the form, and they ask you for all this information, including your medical history, and I provided all that information. About a week later, they came back and denied me long-term disability, because based on the algorithm they used, I had less than six months to live. I called my doctor and asked him if there was something he forgot to tell me, because, for one thing, I thought I had a lot longer than that. And, fortunately, he said no, he wasn’t hiding anything from me. I actually went back to the insurance company and said I wanted detailed information on what they were using and the technology they were using, and they were using, you know, an AI tool that was taking samples from different environments, none of which were similar. So they were taking, you know, different types of people, different age groups, different medical histories, no similarities, putting them all into the same tool, and spitting out an outcome that was completely inaccurate. If I had taken that as, oh wow, that’s the truth, it could have negatively impacted my life.

SD: In a big way. That’s an understatement.

SF: Yeah, in a very big way. So, I mean, that’s what it goes back to: where’s the human oversight? Someone at the insurance company should have been looking at that, gone back to the application and said, there’s no way, this doesn’t correlate, it doesn’t match the output. And that’s where human intervention is so critical.

SD: I’m sorry that happened to you, but I’m glad that you caught it and that you’re here to tell the story today. Yeah. On this topic a little bit, I was thinking about how ethical AI is not just about design. We talked about having to design it and intend for it to meet ethical standards. But I think it’s also about culture. For example, a company can have the best policies on paper, but if the culture doesn’t support them, they’re not going to stick. So what does it take to create an organizational culture that supports responsible AI? Is this a leadership issue, a training issue? Both? Something else?

SF: It’s definitely both of what you said. It’s a leadership issue, it’s definitely a training issue. It’s an understanding of what AI can do, an understanding of the outcome you’re trying to achieve, an understanding of the goals of the organization. It’s also understanding what the organization’s risk mitigation is. Some companies, and this goes to culture, have a very low risk tolerance because they’re heavily regulated organizations, and they’re going to say, well, you know, we’re going to be a little bit more thoughtful and a little bit more conservative in our approach to AI. You have other companies that aren’t as heavily regulated and have more of a higher tolerance to risk, I’d say. I don’t mean to imply that they don’t take risk into consideration, but they’re more willing to take risks and to say, well, we’ll do it now and ask for forgiveness later. And those are the organizations that oftentimes find themselves in trouble with the regulators, because they’re, you know, saying, but we’re doing all these cool things, and whether they’re developing or deploying, they can always justify the benefits of what they’re doing: we want to stay ahead of the competition, we want to get to market faster, we want to be able to support our customers. And those are good things, but they don’t… And the culture is sort of built around ask for forgiveness later, just do it. Not thinking about the ramifications can really damage your reputation if you don’t take a step back. So it definitely comes from the top down, because the top is what defines your risk profile.

SD: Agreed. Agreed. So I think I heard you say that we need to embed how AI is developed and used into the culture, really integrated into the culture itself, not just write it down on paper and then and then put that paper away. When I think about it,

I think about embedding that ethical reflection, that accountability, that social awareness that we talked about before into the DNA of AI development. Do you agree with that?

SF: Absolutely, I agree with that. I think it’s absolutely critical. And it also goes to, you know, embedding it in the education of the workforce, making sure that the workforce understands, because oftentimes you can embed it in the culture, but yet you might have a rogue group of, you know, developers, and not to pick on IT, but, you know, their job is to be innovators, and their job is to deploy as much technology as they can, oftentimes to reduce costs and improve efficiency. But it’s also about educating, you know, the parts of the organization that have a different kind of focus on what they’re trying to achieve, to get them to understand the culture of what the organization wants to do and the risk mitigation, and get everybody to buy into it.

SD: And it makes a lot of sense. I think it’s so important. And then when things go wrong, because sometimes things go wrong. What mechanisms do you think organizations need to have in place to ensure that accountability and to make things right again?

SF: So, first of all, they need to have guardrails in place, and there are several kinds of guardrails. One is preventative: making sure that before they deploy anything, they have embedded monitoring and management capabilities, so they can recognize right away if something’s about to go wrong. They need to have, you know, a detection guardrail, so that they know immediately when something is heading in the wrong direction and they’re about to have a problem, and can immediately say, okay, this isn’t working, we need to take a step back. And then they also need to have corrective action; they need a plan for what the corrective action is if things go south. You have to have the plan in place before the problems occur. You can’t, you know, sort of wing it and figure it out after the problem occurs. And then obviously you need to have the, you know, ethical and legal guardrails in place that say, all right, we’ve looked at prevention, we’ve looked at detection, we have a corrective action in place; what do we need to do from a legal and ethical point of view? What’s our plan? Who do we notify? Is this something where we have to notify the regulators? Is this something we need to do a report on? Who do we need to inform? Have a team; you know, just like you have an incident response team, you should have an AI incident response team as well. What if it hits the press that something went really wrong? How are you going to communicate? You don’t want your IT person out there communicating to the press. You probably don’t want Legal communicating to the press. You want your PR people communicating to the press if something happens.
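As a rough illustration of the preventative, detection and corrective guardrails described above, here is a short sketch. The checks, the pattern used to spot personal data, and the function names are hypothetical examples of the idea, not a reference implementation.

```python
# Hypothetical sketch: wrap a call to an AI tool in prevention, detection and
# correction layers so problems are caught before, during and after the call.
import re

# Very rough stand-in for "personal data" detection; a real program would use
# a proper classifier or data-loss-prevention tooling.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def preventative_guardrail(prompt: str) -> None:
    """Block obviously risky inputs before anything reaches the AI tool."""
    if PII_PATTERN.search(prompt):
        raise ValueError("Personal data detected in input; remove it before using the AI tool.")

def detection_guardrail(output: str) -> list[str]:
    """Flag problems in the output as soon as they appear."""
    findings = []
    if PII_PATTERN.search(output):
        findings.append("output contains what looks like personal data")
    if not output.strip():
        findings.append("empty output")
    return findings

def corrective_action(findings: list[str]) -> str:
    """Pre-agreed plan for when something goes wrong: contain and escalate."""
    # A real plan would open an incident, alert the AI incident response team,
    # and decide whether regulators or customers need to be notified.
    return "Output withheld; escalated to AI incident response team: " + "; ".join(findings)

def run_with_guardrails(prompt: str, model_call) -> str:
    preventative_guardrail(prompt)          # prevention
    output = model_call(prompt)
    findings = detection_guardrail(output)  # detection
    if findings:
        return corrective_action(findings)  # correction
    return output

if __name__ == "__main__":
    fake_model = lambda p: "Draft clause ... contact 123-45-6789"
    print(run_with_guardrails("Draft a template employment contract clause", fake_model))
```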

SD: Hopefully it doesn’t go that far. But yes, I agree with you. Now, AI is often deployed globally. You know, similarly, ATS, as an example, we’re a global organization. And culture and ethics can be very deeply local. So how should companies manage the tension between global ethical principles and local legal or cultural expectations?

SF: You know, that is always a challenge, and, you know, with ATS being a global company, I look at it the same way as managing, you know, a privacy program in a global organization. My advice is always to build one AI compliance program, in terms of an ethical AI program. You don’t want to roll out AI piecemeal. You don’t want to say, well, we’re not going to roll it out in this part of the organization and we’re only going to do it in that part of the organization, because that’s going to cause conflict. So what happens is, you need to understand what the rules and regulations are in the different jurisdictions you operate in when it comes to the use of AI. And that’s also going to pull in the data protection regulations, the data usage regulations, you know, consumer laws, competition laws. You’re going to have to look at those and decide on the best way to deploy this. Do you have to deal with works councils? Do you have to, you know, talk to them? And I know with ATS you do have works councils, so you’re going to have to go to them and present your AI plan in terms of what you’re trying to roll out. So it needs to be inclusive, understanding what all the obstacles are that you’re going to run into and what you need to do in order to mitigate those risks and obstacles. Just the way we do it with a privacy program: you build it on one global foundation and you embed, maybe not the strictest, but close to the most restrictive requirements. Then when you roll it out, there’s less chance of having any issues, because you’ve looked at all of the parameters and all of the jurisdictions and the requirements and dealt with them that way, the same way we did with the privacy program.
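A small sketch of that “one global program” idea follows. The jurisdictions, fields and requirement values are made up purely for illustration; the design point is that per-jurisdiction rules are merged by always keeping the stricter setting, so the rolled-out baseline sits close to the most restrictive requirement everywhere.

```python
# Hypothetical sketch: derive one global AI compliance baseline from
# per-jurisdiction requirements by always taking the stricter option.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIRequirements:
    human_review_required: bool       # must a human approve decisions about people?
    impact_assessment_required: bool  # is a privacy/AI impact assessment required?
    max_retention_days: int           # how long may input/training data be kept?

# Entirely fictional values, for illustration only.
JURISDICTIONS = {
    "EU": AIRequirements(True, True, 180),
    "Australia": AIRequirements(True, False, 365),
    "US": AIRequirements(False, False, 730),
}

def global_baseline(rules: dict[str, AIRequirements]) -> AIRequirements:
    """Merge per-jurisdiction rules by always keeping the stricter setting."""
    return AIRequirements(
        human_review_required=any(r.human_review_required for r in rules.values()),
        impact_assessment_required=any(r.impact_assessment_required for r in rules.values()),
        max_retention_days=min(r.max_retention_days for r in rules.values()),
    )

if __name__ == "__main__":
    print(global_baseline(JURISDICTIONS))
    # One program for everyone: human review on, impact assessment on, shortest retention.
```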

SD: Is anybody doing it really well, from your vantage point, is there anyone out there that is successfully integrating AI responsibly and ethically into their operations across the board?

SF: You know, that’s always such a great question, and I wish I could say, yeah, there’s one company that I really know is doing it great. I think everyone is sort of at the same stage right now, where they’re, you know, 100% in on AI. They see the benefits, absolutely, and I agree, there are some really creative and really important benefits of AI. But I think at the same time they’re also facing the challenge of what they need to do to mitigate the risks. So AI has been around, but as I said earlier, we’re in the middle of a paradigm shift. Culture is changing. AI is driving change within the world and within organizations. You know, there are horror stories out there and there are great stories out there, and everyone is trying to find that balance right now. So I think many organizations, most organizations that I’ve dealt with, and certainly through the Global Council for Responsible AI, are sort of at that stage where they’re deploying and they’re developing, but they’re still trying to work out the responsible AI part of it.

SD: Well it’s nice to know we’re all in the same boat. So long as we keep focused on the right things. We’ve talked a lot today about ethics and responsibility and making sure that we stay human, as we innovate. But what’s one message that you might leave our listeners with who are just beginning their responsible AI journey?

SF: I think what I’d tell them is, you know, don’t be afraid of AI. Understand that there are risks out there, and address those risks. There are going to be biases, as we talked about; there are going to be deepfakes out there; there are certainly going to be AI scams. But at the same time, educate your organization on the benefits of AI, understand the jurisdictions you operate in, understand what you’re trying to achieve through AI. Build that plan. Get the right people involved. It can’t be solely a technology decision; the entire organization has to be involved. And if I had to say who definitely needs to be involved: obviously IT needs to be involved, Legal needs to be involved, Privacy needs to be involved, Leadership needs to be involved. You need to define what your objectives are and how you’re going to get there. Are you going to do it internally? Are you going to do it externally? You have to look at and assess your vendors too. You know, one of the things you need to look at, and I think you’re having additional podcasts where you’re going to talk about some of the legal things, but just because you’re working with a prominent vendor now, where you might have a data processing agreement or a contract in place, that doesn’t mean it covers AI. So that’s one of the things organizations need to look at too: you might want to use the new AI technology of an existing vendor, and now you have to update your contracts. And that’s one area a lot of organizations aren’t paying attention to. So I think I’d leave with that advice: take advantage of AI, but also identify the risks and mitigate those risks.

SD: It’s wonderful advice. Thank you. Thank you so much for that and for sharing your time and insight with us today. To our listeners, thank you for joining us. This is just the beginning. In our next episode, as Sheila already alluded to, we’ll dive more deeply into AI and accountability and specifically navigating legal risk and governance in AI and automated systems. Until then, let’s keep shaping the future responsibly.