Video: AI-Powered Automation: AWS & SS&C Blue Prism | Duration: 59:43 | Summary: AI-Powered Automation: AWS & SS&C Blue Prism | Chapters: AI Evolution Introduction (0:00), Introducing AI Experts (1:41), Technology's Dual Impact (2:49), AI Adoption Reality (4:59), Measuring AI Value (9:31), AI Automation Applications (10:29), Agentic AI Reality (15:57), AI Customer Service (16:48), AI Adoption Readiness (20:45), AI Adoption Barriers (27:53), AI Adoption Barriers (30:01), Organizational AI Readiness (32:19), Readiness for AI Adoption (35:05), Addressing AI Trust (38:18), Governance and Trustworthiness (40:20), Earning AI Trust (43:37), Governance and Innovation (45:49), Powerful AI Ecosystem (51:50), AI Benefits in Practice (52:55), Enterprise AI Implementation (53:57), Concluding Thoughts (56:20)
Transcript for "AI-Powered Automation: AWS & SS&C Blue Prism": Hello, everyone, and welcome to today's webinar, where we're going to be exploring AI's constant evolution and some of the challenges and opportunities it presents. There have been a lot of lessons learned in the past two years, but if we've learned one thing, it's that the potential of AI is not limited to tinkering around the edges and making incremental improvements to existing processes; it's fundamentally helping us reimagine the way that work gets done. We just concluded some global research, and in that survey we found that 84% of organizations said the potential of AI is to completely disrupt existing business practices and bring about new ways of working. We've also learned on this journey that the road there can be very bumpy if it's not traveled carefully. Just a couple of weeks ago, the UK and US governments declined to add their names to the international agreement on artificial intelligence at a global summit in Paris. The reasons were quite different: the UK government cited concerns over national security and global governance, while the US argued that too much regulation could kill a transformative industry before it's even taken off. So the question most organizations are asking right now is how to balance the incredible potential AI offers while managing the risks that are becoming apparent as we deploy it. I'm Natalie Keithley, VP of product marketing here at SS&C Blue Prism, and I'm delighted to be joined by a panel of experts today who are going to shed some light on how to use AI in a practical, responsible, and impactful way across organizations. So first of all, a very warm welcome to Jason Baras, chief technical and operating officer at Outsource Automation Solutions. Great to have you here, Jason.
Also, a very warm welcome to Pavan Root, principal partner sales solution architect at AWS. Welcome. And finally, a warm welcome to Dan Terns, the APAC CTO for SS&C Blue Prism. So I'm just going to start with you, Pavan. We talked a little bit about the potential this technology offers, but where is it right now? Is it at a point where we're generating net new value, or is it still just making it easier and faster to do the things we've always done, generating existing value? Thanks, Nat. Yes, indeed, the technology is currently at a very interesting point, where we're seeing it do both: optimizing existing processes and creating new value streams. Let me break it down for you. On existing value optimization, we're seeing significant reductions in time and cost for current processes. Document processing that used to take hours now takes minutes. Customer service response times have been reduced by 60 to 70%. Even the developers are pretty happy, with their code development cycles shortened by 30 to 40%. In terms of new value creation, this has been quite a game changer: for example, real-time multilingual translation in customer service; predictive maintenance in manufacturing with unprecedented accuracy; and complex pattern recognition in large datasets that humans couldn't process before. It's creating entirely new business models as well, where we've seen automated content creation and adaptation, and predictive analysis for business decision making. Let me give you a real example.
When we combined AWS's AI capabilities with robotic process automation, one of our customers in financial services not only accelerated their existing loan processing but also created a new risk assessment model that identifies patterns across unstructured data, which is adding completely new value. So, bottom line, we are seeing both optimization and innovation. The key is to choose the right use cases and implement proper guardrails. That is very good advice, because it is very much about the use case that you choose and the guardrails you put around it. But, Dan, is this something that you're seeing, and where are you seeing the technology being used right now? I'm going to both agree and maybe be a little bit controversial, or provocative, as well. So look, absolutely, there are plenty of examples of companies using Gen AI, and AI more broadly, to improve existing processes: time reduction, cost reduction, error reduction, et cetera, 100%. That's certainly the majority, but there are certainly examples of new value creation as well. At SS&C, we have our own: we've created new services and new products we've gone to market with, around being able to ingest, aggregate, and analyze data from US labor statistics and so on; I don't know all the details. So yes, it's done. But here's the provocative part. I think it's fair to say that there are far more examples of value not being created and not being added than there are of the opposite, or that the value is kind of illusory or ephemeral. We've got all the breathless statements from analysts and the investor community, all the investor enthusiasm about inflection points in humanity and trillions of dollars' worth of economic value being added, and it's just not showing up in any statistics yet.
Either in terms of GDP growth, or productivity growth, or labor market movements, or wages; it's not. In fact, the US Census Bureau does a survey every two weeks that questions companies about their use of AI, and they ask whether AI is used to deliver goods and services, i.e., the core activities of the organization. And it's at single-digit levels: five to six percent in America, six percent in Canada. It's very low. I think that most of the adoption of Gen AI is around this sort of ad hoc, copilot-style Q&A: the coder saying, how do I write a better SQL statement; the salesperson saying, write me an executive summary; the HR person saying, write me a job description. And it's slivers of savings and slivers of time. Whether organizations are able to roll that up into something they can bank as tangible value, that's the question. Or whether we just get down to the pub earlier on a Friday than we might otherwise have done. That's the big question, right? So, definitely using it; certainly lots of happy employees; but whether the companies are generating real tangible value from that is another thing. And as Pavan said, he was talking about analyzing massive datasets, and I think that's kind of the secret sauce, if you like. You have to be operating this stuff at scale. Whether that's analyzing massive datasets, or doing not one transaction but a thousand transactions a day; not one email, but all of my emails. That's where you start moving into the area of real tangible value. And I think that most organizations (I'm sure we'll get into this), and certainly most use cases, are just not there yet. There are too many question marks.
There are too many concerns, and too much question about the trustworthiness of the answers, for organizations to say, sure, let's just automate it. In some ways, they look at it and say, the downside of getting it wrong is worse than the upside of getting it right, so until I can ensure there's no downside, I'm going to go slowly. And that's why I think we see these adoption patterns. A lot of this is potential for the future rather than widespread adoption across society at the moment, but I think we'll get there. There was a lot to unpack there, Dan. But just to summarize what you were saying: there's a question mark about whether organizations are actually getting to that next level of value at this point, and even if they are, how they're measuring it, or whether they're able to measure it right now. Now, I'm going to ask you to hold that thought, because we're going to come back in a little while and talk about some of the challenges organizations are facing and their readiness to do some of this properly. But before I do that, I want to ask you, Jason, because you've been doing a lot of work with your customers on deploying this technology to support everyday business. Talk us through some of the use cases you're seeing organizations bring to the fore. Yeah, thanks, Natalie. In terms of impact, we're seeing quite a significant impact both for staff in an organization and for the customers or consumers of that organization's services. To elaborate a bit more on the use cases: basically, processes that could only have been automated if cognitive functions or human-like interaction were available can now actually be automated. At OES, we specialize in both automation and the utilization of AI services.
For example, clinical coding of healthcare episodes. Within a healthcare organization, be it a hospital or a private healthcare company, any treatment or episode for a patient is coded, and these codes are used for many things: funding, reimbursement from an insurance company, and various other statistical analyses that allow us to identify patients at risk, and so on. So it's a really important function, but it requires humans to analyze the actual clinical notes for the episode. Yes, some of these could be automated with intelligent automation and some basic rules, but what we're finding is that by utilizing AI we can automate the coding of a lot more episodes, because the AI can actually understand what's written in the clinical notes. Another process we can now automate: agents in call centers. Lots of organizations have call centers, particularly councils, and we're working with a number of councils on this. We can use agentic AI, which I know you're going to come to a bit later, to implement virtual agents within a call center that actually pick up calls from humans. Now, this is a nice segue into voice automation, because that's one of the use cases: we deflect calls from a call center to a virtual agent, and those virtual agents will have a conversation with the person on the end of the line. I've got to admit I was quite skeptical when we first dipped our toe into this technology, but do you know what? I was absolutely staggered at how good it is. This is proving hugely successful, and it's proving quite popular amongst our council customers in local government.
These virtual agents can have the conversation with a human, and they can then instruct traditional intelligent automation to take any relevant action. So you're doing both parts of the process, the end-to-end process, which is great news, because the human's call gets answered instantly rather than sitting in a queue, everything is dealt with on that one call, and it's all automated on the back end, so the experience is so much better. There are other use cases and benefits as well. Know your customer: we can now use AI to review and verify documentation, and we can verify applications as well. Emails: if you've got a contact center or a help desk receiving a huge number of emails each day, we can use AI to review and classify those emails. It can respond with AI-generated responses, or pass an email off to a human to handle, or indeed get intelligent automation to perform some back-end action to process the request from the email. And I talked about voice automation, but the other interaction you get, particularly through websites, is via chatbots. We've all experienced those, at various levels of capability. We can now use avatars on websites to bring those chatbots to life; the chatbots are more intelligent, and we're working with some customers to implement this. The human sees an avatar on a website and interacts with it, and the conversations are lifelike, as if that avatar were a real person. So we're seeing huge changes in that customer or consumer interaction with an organization, and obviously big changes for staff within the organization as well.
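The email triage flow described here can be sketched in a few lines of code. This is a minimal illustration, not OES's or AWS's actual implementation: the `classify_email` stand-in uses simple keyword rules where a real system would call an LLM or a trained classifier, and all names and categories are hypothetical.

```python
# Hypothetical sketch of the email triage flow: classify each inbound
# email, auto-respond to routine requests, and hand anything the
# classifier is unsure about off to a human agent.

AUTO_RESPONSE_CATEGORIES = {"password_reset", "opening_hours"}

def classify_email(body: str) -> str:
    """Stand-in classifier. A production system would call an LLM or a
    trained model here; keyword rules keep the sketch runnable."""
    text = body.lower()
    if "password" in text:
        return "password_reset"
    if "opening hours" in text:
        return "opening_hours"
    return "unclassified"

def triage(body: str) -> dict:
    """Route an email: auto-respond when the category is safe, else escalate."""
    category = classify_email(body)
    if category in AUTO_RESPONSE_CATEGORIES:
        return {"category": category, "route": "auto_respond"}
    # Anything ambiguous goes to a human, mirroring the handoff Jason describes.
    return {"category": category, "route": "human_agent"}
```

The design point is the routing split: the model only ever answers the categories you have explicitly whitelisted for automation, and everything else defaults to a person.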
So yeah, there are lots of use cases; I could talk all day about use cases, but those are some key examples we're working on with customers at the moment. Some great insights there, Jason, into some of the ways this technology is being used. But I'm going to stick with you for a minute, because you mentioned the magic word of the moment, and that's agentic. It is the biggest buzzword out in the industry right now, but there are different definitions from what I can see. As I understand it, the main definition of agentic is truly autonomous work, where an AI agent is working and making decisions on behalf of people without people being involved. Now, what is real and what isn't real in this space? Are we really ready for autonomous decision making at this point, and is that really what organizations are doing with agentic AI? Okay, so let me talk about one of the customers we're working with on this. With AWS, we're delivering agentic solutions to a local council. What this does is enable the complete resolution of an inquiry within a single request. To put that into context: a resident phones up complaining that their bins weren't emptied. That call gets picked up by a virtual agent, and they have a real voice conversation with that virtual agent. As I said in my answer to the last question, these conversations are stunningly realistic; they are really, really good. We've got some recordings of conversations on our website which demonstrate just how good it is. So the customer phones up, they speak to a virtual agent, it's very realistic, and the virtual agent handles their inquiry, kicks off all the back-end activities that need to take place, and gives the customer a response, all in one conversation.
And that resident isn't sat there for half an hour waiting for a human to pick up the phone because there's a queue of people waiting. It's an instant answer, instant conversation, instant resolution, and job done. Anybody who's had to call an organization and sit in a queue can understand this; we've all been on the phone for thirty, forty, fifty minutes, or even over an hour, waiting to get through to a human. This is actually a game changer, both for the customer experience and for the organization, in terms of cost and customer satisfaction. So yeah, we're talking about resolving that request in just one single interaction by combining agentic AI and intelligent automation. In days gone by, you'd make a call, a person would pick up the phone, you'd have a conversation with them and explain what you need, and then you'd potentially be waiting days for a resolution, because they'd have to submit a request to someone else, which would then be picked up by another human. The process is slow and laggy, and the resident or customer gets no feedback on what's happening; they don't know how long it's going to take. This all changes. We're cutting lead times from days down to minutes, because it's all done in one go. Obviously, the bin lorry is not going to come out within two minutes and empty their bin, but at least they know it's booked and it's going to happen. So it provides huge benefits all around. And as I said earlier, these virtual agents are almost human-like. They're not clunky or frustrating for customers; they're not like the IVR systems we've all come to live with and hate.
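The single-interaction pattern Jason describes, where the virtual agent maps the caller's intent to a back-end automation, triggers it, and confirms the outcome within the same conversation, might be orchestrated roughly as below. This is a sketch under assumptions: the intent names, `book_bin_collection` stand-in, and booking-reference format are all hypothetical, and a real deployment would call an RPA or workflow service rather than a local function.

```python
# Hypothetical sketch: resolve a caller's request in one interaction by
# dispatching the recognized intent to a back-end action and returning
# a confirmation in the same conversation.

from dataclasses import dataclass

@dataclass
class Resolution:
    intent: str
    job_id: str
    message: str

def book_bin_collection(address: str) -> str:
    """Stand-in for the RPA / back-office step; returns a booking reference."""
    return f"BIN-{abs(hash(address)) % 10000:04d}"

# Map of intents the agent is allowed to resolve autonomously.
BACKEND_ACTIONS = {"missed_bin": book_bin_collection}

def resolve(intent: str, address: str) -> Resolution:
    action = BACKEND_ACTIONS.get(intent)
    if action is None:
        # Unknown intent: keep the human handoff, as in the transcript.
        return Resolution(intent, "", "I'll pass this to a human colleague.")
    ref = action(address)
    return Resolution(intent, ref, f"Your collection is booked, reference {ref}.")
```

Note the whitelist of intents: autonomy is bounded to actions the organization has explicitly approved, with everything else falling back to a person.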
They're not like some of the chatbots we've all experienced, or endured, in our time. These are really slick, very impressive. Great, thank you, Jason. Those are really good examples of how it's different and where it's actually been applied already. Dan, from your perspective, how realistic is it that in the near term we're going to see agents taking on more of the autonomous decision making and activity in organizations? Unfortunately, I have two answers now; I started with one answer, and now I've got two. Here's what I'd say. It comes back to your definition, and it will be interesting to see how the term agentic AI morphs over time. If you're saying that agentic AI represents an agent being able to do things, leverage large language models, and complete some task without human interaction, I don't think there's any hype to that at all. We've been doing that for ten, twelve, fifteen years. There's really not much significant difference in that scenario from RPA, attended automation, unattended automation, or anything like that. I'll tell you, I joined Blue Prism in 2018, and in 2018, in our Customer Excellence Awards, there was a submission from a media organization that was automatically publishing financial reports on companies' earnings: no human involvement, natural language processing rather than Gen AI, and that was six, seven years ago now. So agentic as a mechanism meaning no humans is not hype at all, and frankly it's not much different from what we've been doing for years.
If you take the definition of agentic AI that focuses more on autonomy, the agent being able to decide what to do without being told what to do, without being given a predetermined course of action and deterministic rules to follow, then that's a different sort of story. That relies much more on the large language model piece, the reasoning piece, the thing that says, I've seen similar scenarios like this before, they did it this way, let's do it this way. I think there's real hype in that. The idea of an agent deciding what to do without any predefined rules to follow, if it happens in the short term, will be constrained to a very narrow band, I would say. And it comes back to the same sort of thing with large language models and Gen AI at the moment. As I said before, the number of use cases is very small, and a large part of that is hallucinations: somebody not being able to trust that the large language model will come up with the right information. I saw a thing in the media the other day. Apple uses AI to automatically summarize BBC news articles and publish them to Apple News, and they got into big trouble in January because they were getting it entirely wrong. I don't know if you saw it; it's funny, right? There were three examples. One was that they announced Luke Littler had won the World Darts Championship before the final was played. Another was that they announced Rafael Nadal had come out as gay. Now, these are not earth-shattering, but they got it 100% wrong. 100% wrong.
And if that's the state of doing something as simple as summarizing a news report, I think we're a long way from saying, sure, you decide what to do next. I don't think companies are going to be prepared to take that risk for quite a while. That's one aspect. The second aspect we have to be aware of is the regulatory frameworks. You mentioned them before, Nat: the regulatory frameworks being demanded by governments really do push a primacy of human oversight and human control. In Australia in particular, there was a debacle over the robodebt scandal, literally a scandal, with a royal commission about it. One of the prominent findings and recommendations from the royal commission was that humans have to be the last ones to press the button; they've got to be the ones making that decision. We're not going to be in a situation where a robot decides to debit my bank account automatically. So I think those things are going to constrain autonomy. So, hype there. But getting humans out of a process and getting large language models and AI doing it end to end: 100%, but there's nothing particularly new about that. Yeah, those are really good insights, in terms of the scope, the breadth, and the depth of what this technology is going to be taking on on our behalf, which raises some really significant questions around readiness. Now, you touched on a few there, like governance. But, Pavan, I'm going to turn to you here: what are you seeing in terms of organizations' general state of readiness to take this technology and adopt it in a way that is responsible but impactful at the same time? Yeah, I think, based on our customer interactions, 2024 was a lot about Gen AI.
We basically looked at different organizations, and on readiness I would want to break it down into three segments: large enterprise, mid-market, and small and medium businesses. Large enterprises are traditionally going to be slow on this one. They have dedicated AI/ML teams, they are running pilots, they are aiming towards a clear governance framework, and they're actively investing in infrastructure; that's about where they're playing. The mid-market, I would say, is a bit better in percentage terms. They're actually experimenting with specific use cases; they have limited internal expertise, so they're relying a lot on partners and the wider ecosystem to support them; they've started to develop AI strategies; and they're exploring a lot of partnerships. That's the key theme we're seeing in the mid-market. Small and medium businesses are a mixed bag. Because they're constrained in resources and expertise, they're pretty much looking at turnkey solutions, focusing on things that give an immediate return on investment. So, as I mentioned, organizational readiness varies depending on the segment. In terms of the key adoption barriers, let me break them down into four main categories. First, obviously, is the technical barrier, and it all boils down to data. Whether it's traditional AI, machine learning, or Gen AI, data is king, and purely from a data readiness perspective, along with data quality issues, that's a massive barrier. Second is integration with legacy systems; we've just spoken about the council experience.
Just imagine if the legacy systems don't have the integration APIs that the AI agents can call: we can't execute that automation. We also have a lot of limitations in infrastructure, where we're seeing that companies are not ready with the core infrastructure that would support their Gen AI journey. The second category, I would say, is the organizational barriers: they don't have a clear AI strategy, they've got budget constraints, and there's a lot of risk aversion in the regulated industries, for example FSI or healthcare; we're seeing that as a massive barrier. The third is around people and culture. At the top of the list is fear of job displacement: if agents are doing this, what am I doing then? That's a big conversation, and my view is that Gen AI, just like any other technology, is not there to take a job. Thirty, forty, fifty years back, when computers came in, everyone thought they'd lose their jobs. I don't think that's going to happen here either; a similar cycle of thinking about change is playing out. There's already a skills gap as well; because this is so new, a lot of people are not trained on what to do in Gen AI, so that's a massive barrier from the people perspective. Sorry, there's one more thing: there's a trust issue as well. We're finding that, like Dan mentioned, people ask how they are actually going to trust a system that gives you two different answers to the same question. At this moment, if large language models are not grounded correctly in your organizational data, they are bound to have those hallucinations. So there are trust issues there. And lastly is the governance barrier.
Let me put it this way: there are a lot of data privacy concerns, a lack of clear policies, and regulatory compliance uncertainties around what exactly the Gen AI regulations are going to be. You just mentioned the UK and the US; they took different positions. On top of that, there are security considerations as well: what happens if my data goes into a public LLM? Is it going to be used by other organizations? All of that is becoming a big adoption barrier. So, to summarize, in my experience success comes to the organizations that start small but think big. They focus on specific business outcomes, they invest in people and training, they build governance guardrails early in the process, and they partner with experienced providers. We've seen that as a big pattern. But the good news is that we're seeing increased maturity in organizational readiness quarter on quarter, and as more success stories emerge, implementation frameworks become more standardized, and more use cases come up, I think confidence in the technology is going to increase. Thanks for that. So this is a pretty significant challenge for organizations in terms of the readiness they've got to achieve: at the technology level and the data level, at the organizational level with the people and trust issues, and of course the governance piece you honed in on there. Jason, what are you seeing? Are similar sorts of things impacting overall readiness, or are you seeing anything different? Yeah, so I'm probably going to reiterate some of what Pavan has just said, but also add a few other observations that we've come across at OES.
Typically, organizational readiness for this technology is, I'd say, generally quite low at the moment. I'm not saying the learning curve isn't being scaled rapidly, but right now I think most organizations aren't really ready for this. In particular, most of the organizations we work with could get huge benefits just from bog-standard automation, without AI. So in some respects, organizations probably need to learn to walk before they can run. But coming on to AI and readiness for AI more specifically: a lack of knowledge or awareness of what can be done restricts creativity. It's the same with AI and automation, actually; they don't understand the art of the possible. That's where we help customers and take them on that learning curve. Then there's conservatism among key stakeholders when it comes to innovation; there's an element of skepticism: is it really going to add value? We've all seen the horror stories of organizations plowing millions into AI and getting nothing out of it, so it's important that customers go on that journey in a structured manner. Pavan touched on this as well, around starting small and then growing and scaling; it's really important that you do learn to walk before you run. Then there's standardization within organizations, standardized processes. You would have thought that if you went into a payroll team, every person in that team would be doing everything exactly the same way. But we've been in organizations where we map out a journey or a process, and when it comes to implementing it, people in the team say, no, we don't do it that way.
People are doing things differently. So it's really important, before you do any sort of automation, whether with intelligent automation or AI, that processes are standardized, and this becomes less common within large teams. Regarding deployment, obviously there's a high level of technical acumen required: experience in delivering intelligent automation and artificial intelligence solutions is key. We've gone into organizations, one in particular that I'm obviously not going to name, where they had another well-known consultancy in, doing journey maps for conversations. When we actually reviewed them, they were next to no use at all, because the consultancy had no understanding of how agentic AI should work, how those journey maps should be documented and described, or, to touch on what Dan said, how much autonomy the virtual agent needs to do its job. They just didn't have the understanding. It's really important that the people involved in the program have the relevant experience, and we're not seeing that in organizations. Another thing, and this applies both to AI and intelligent automation: staff need to know how to interact with any sort of automated solution. Sometimes you still need a human in the loop; sometimes you still need human handoffs. These all need to be consistent, and the staff in an organization need to know exactly how the technology works, what it does for them, and how to interact with it and work alongside it. Because whether it's intelligent automation, virtual agents, or any sort of agentic AI, that technology is working alongside humans within an organization, and there needs to be collaboration between the two.
Finally, to touch on what Pavan said: there's that level of trust and governance. I'll come back to governance in a bit, because it's an important aspect for us at OAS. But that skepticism, that question of trusting the AI to get it right and not come up with, as Pavan said, two answers to the same question, can be a barrier as well. So, are organizations ready? Some of them are. Most aren't, but most of them are able to get ready.

That's a very good point, and I think we've seen this in previous cycles with different technologies: adoption happens at different rates, and it really comes down to the organization itself, where it sees potential and how much it's prepared to invest. But one of the things you said was really key: it's the people, the people in the loop throughout all of this, that become pretty relevant. I'm going to bring Dan in here, because, Dan, you touched on so many things when we were talking about agentic AI. One thing you highlighted was when AI gets it wrong, fundamentally wrong, and the impact that has on trust. So as we move towards more autonomous work, where this technology takes on more of those decisions on our behalf, what are organizations going to have to do to address that trust issue, Dan?

There's probably not one answer. Let me give you a little anecdote. As you know, and some of the audience will too, SS&C has maybe a dozen or so internal Gen AI use cases built on large language models that we use for our core business. We describe these use cases to our SS&C Blue Prism prospects, customers, and targets, and talk to them about opportunities for Gen AI. And everyone just wants to know: how did you do it?
What accuracy are you getting for x, y, and z? How do you do it? How do you ensure it? What they're a hundred percent excited about is hearing that someone has done it, is prepared to automate it, and is prepared to say, we're confident in getting it right, because nobody really knows how to do it. That's what people are excited about.

What it fundamentally comes down to is the question of governance. I see governance and the innovation of large language models as two sides of the same coin. The lack of adoption is very much a facet of the risk that companies see themselves facing, and that risk is in turn a facet of the governance, or guardrails, they can put around the large language models. At SS&C, we don't get a hundred percent accuracy out of LLMs when we ask a question. But what we are able to do, and I think this is the key, is identify and then intercept risky transactions: the false positives and false negatives, as it were. That's what's going to kill you. You can be 95% accurate, but if 5% of your transactions are done wrong, companies aren't going to adopt it. So how do I intercept that 5% of transactions? If you can do that, then you can have confidence in the system. And it's not just hallucination in the output; it also goes to the prompts, the inputs: personally identifiable information, confidential information being passed out, copyright material, handling all of that. So it really comes down to how you put those guardrails in place, how you craft intelligent prompts that minimize the potential for hallucinations but can also identify where that potential exists, stop the transaction, pass it out to a human, and give that human the higher-value activity of saying, what do you think?
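The interception Dan describes, where every transaction gets an explicit routing decision and anything flagged or low-confidence goes to a person instead of flowing straight through, can be sketched in a few lines. This is a minimal illustration under assumed names: the `route` function, the 0.9 threshold, and the flag list are hypothetical, not SS&C's actual mechanism:

```python
# Minimal sketch of intercepting risky LLM transactions.
# The threshold and flag checks are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.9

def route(answer: str, confidence: float, flags: list[str]) -> dict:
    """Give every transaction an explicit routing decision:
    low-confidence or flagged answers go to a human reviewer."""
    if flags or confidence < CONFIDENCE_THRESHOLD:
        return {"action": "human_review", "confidence": confidence, "flags": flags}
    return {"action": "auto_process", "confidence": confidence, "flags": []}

print(route("Claim approved", 0.97, [])["action"])                # auto_process
print(route("Claim approved", 0.97, ["possible PII"])["action"])  # human_review
print(route("Claim approved", 0.62, [])["action"])                # human_review
```

The value is not the particular threshold but the shape: no answer is processed without a decision, so the risky 5% is caught rather than silently passed through.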
That's when you can actually say: I've got confidence that the bad stuff isn't going to sneak through the traps. Until you can do that, you essentially have to have a human standing in the middle of every Gen AI interaction. And if a human stands in the middle of every Gen AI interaction, you're never going to have automation; going back to my earlier statistic, you're probably not going to be focusing on goods and services, and you're not going to get autonomy. You can solve for that, but it really comes down to that question of governance. That's what's going to give you trustworthiness. And, if I can throw over to Pavan, look at what people like AWS are doing with Bedrock Guardrails, building intelligence into the process. When we started with Gen AI, you asked a question and got an answer, and you either relied on that answer or you didn't. Now you ask a question and run it through a governance pipeline: was it a good question or a bad question? How do we rate the response? Do we have high confidence or low confidence? What's the mathematical algorithm? You get more tools for intercepting those risky behaviors. That, in a nutshell, is the governance space, and that's what's going to give you trustworthiness. One more thing I'll say: it's a slew of different activities that, in total, get you that confidence.

Really, some great thoughts there. So, Pavan, I think Dan threw the challenge over to you there. How do you ensure trust and build that trust?

Okay. What Dan alluded to is the governance side of things. Another big, important factor is implementing what we call human-in-the-loop validation at critical points.
I think those two are absolutely the foundational pillars of getting and earning trust. To summarize, some of the key success factors we're seeing are: keeping transparency around how decisions are made; having a clear accountability structure; regularly validating outcomes; having proper compliance monitoring systems in place, in the form of a dashboard that gives us something we call a trust score; and implementing robust feedback mechanisms. I'll give you an example. For one financial customer-service organization, we built a trust-score dashboard that shows real-time accuracy, decision patterns, and validation metrics for all the automated processes. The transparency helps both employees and customers trust the system. So the bottom line, the key message, is that trust isn't automatically granted. It has to be earned through consistent performance, transparency, and, above all, proper governance. That's how I'd summarize earning trust in what can really get to be a chaotic atmosphere around Gen AI at the moment.

Thank you for that. I heard a great quote at a conference the other day: cool ideas that can't be explained aren't cool ideas. I think that sums up perfectly what you were saying about transparency. Jason, I'm going to turn to you, because you touched on governance a little earlier. How are you seeing organizations balance these two opposing objectives: innovation on one side, and the governance needed to manage the risk around that innovation on the other?
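A trust-score dashboard of the kind Pavan describes, showing real-time accuracy over validated outcomes, could at its simplest be fed by a rolling window of validation results. The window size, the class name, and the single 0-100 score are illustrative assumptions rather than the actual dashboard design:

```python
from collections import deque

class TrustScore:
    """Rolling trust metric over the most recent validated decisions."""

    def __init__(self, window: int = 500):
        # 1 = outcome validated as correct, 0 = validated as wrong
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def score(self) -> float:
        """Percentage of recent decisions validated as correct."""
        if not self.outcomes:
            return 0.0
        return 100.0 * sum(self.outcomes) / len(self.outcomes)

ts = TrustScore(window=4)
for ok in (True, True, False, True):
    ts.record(ok)
print(ts.score())  # 75.0
```

Surfacing a number like this continuously, rather than only at audit time, is what makes the transparency visible to both employees and customers.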
It's a really interesting subject, and something we've particularly focused on at OAS, because we feel it's so important. Let me give you a bit of an analogy. I live on a country lane heading out of town, right next to a humpback bridge. It's a two-way humpback bridge, nothing unusual about it. People come out of town, out of the 30 limit, see the 60 limit on a quiet country lane, and put their foot down. Or they see the humpback bridge and think, let's give the kids in the back a bit of excitement as we go over it. So they go blasting over the bridge. But they haven't done their checks and due diligence on the road, and they haven't realized there's actually a give-way just after the bridge: a T-junction, so you have to go left or right, and straight over the bridge there's a field with about a three-meter drop down into it. It's unbelievable, some of the things we've seen. We get over two dozen cars going into that field every year. Yet thousands of cars get it right, probably tens of thousands.

To me, that's not dissimilar to AI. It's exciting, it's sexy, people get excited about it. They see it and they just want to put their foot down and go with it. I think it was Dan who talked about examples of AI-generated news stories, and I've heard similar stories of slide decks being generated by execs using ChatGPT, with all sorts of weird and wonderful things appearing in them. That's the excited driver coming down our lane, not realizing there's a T-junction on the other side of the humpback bridge, and going straight into the field. It's a car crash, literally. So we've got to be absolutely careful with AI that we don't just get excited.
Organizations shouldn't go running off and doing it just because it's all sexy and great; they should do it with the relevant controls in place and real consideration of the solution they're putting in. For us at OAS, that boils down to a few questions. How is the AI being used to process data? What data is it processing? What decisions is it making? How is that data being handled technically, in terms of security, access, the location of the data, and so on? If you chuck personal information into a publicly accessible LLM, you don't know where it's going to go or who potentially has access to it.

So we address this by design in our solutions. Take the clinical coding service we've developed: we deliver a separate AI instance for each customer, all UK-hosted for UK customers. The instance is managed only by OAS; we don't subcontract to anybody else, and there's no external AI used. It runs on a model we've built internally. And we've put a feedback loop in place. Think of the AI as another person on your team. If you bring a newbie into your team, you train them and show them what to do, but you don't just let them crack on without checking them. You need to check what's going on and what sort of answers they're giving you. You can do the same with AI: quality-check what it's doing, check the accuracy rates it's giving you, and keep a feedback loop in place to ensure it improves, just as a person would. Elimination of hallucinations is really key for us too: with clinical coding, you don't want to start overcoding things, and that could have all sorts of implications. And then there's PII: minimize the PII you're sending off to a model.
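Minimizing the PII sent to a model can be sketched as a redaction pass that replaces recognized identifiers with typed placeholders before submission. The regex patterns, names, and placeholder format below are illustrative assumptions; a real clinical service would need far more robust detection than hand-rolled patterns:

```python
import re

# Illustrative patterns only, not a production PII detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
}

def strip_pii(text: str) -> str:
    """Replace recognized identifiers with typed placeholders
    so no raw PII reaches (or is retained by) the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Patient (DOB 03/07/1961, NHS 943 476 5919) presented with chest pain."
print(strip_pii(note))
# Patient (DOB [DOB], NHS [NHS_NUMBER]) presented with chest pain.
```

Because the placeholders carry a type rather than the value, the clinically relevant structure of the text survives while the identifying detail does not.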
We actually strip all PII in that service. It's not needed for coding episodes: you want to know what procedures took place, what the symptoms were, and what the comorbidities were. You don't care whether the patient is male or female, what age they are, or what their date of birth is. And by eliminating PII from submissions to our model, we're not retaining any PII in the service either. So we're taking a really safe approach, while at the same time giving the AI the ability to autonomously code episodes for our healthcare customers. That governance is really key.

Thank you for that, Jason. At the end of the day, it's about being responsible: making sure you understand exactly what's going on and exactly what these models are doing, and putting the right level of guardrails around it. So, in thirty seconds each, I'm going to ask you for a high-level overview of what you're doing to support organizations on this journey, both to get the most out of AI and to do so in a way that maintains that balance. I'll start with you, Pavan. How are you doing it at AWS?

Yes, thirty seconds; okay, I'll be quick. What we're doing is creating a powerful ecosystem that combines AWS's great Gen AI services, like Bedrock, and the very secure cloud infrastructure we already have, with the SS&C Blue Prism intelligent automation platform. Together, we're providing prebuilt, industry-specific solutions; ensuring secure and governed AI implementation; maintaining human oversight while maximizing automation benefits; and offering comprehensive training and change-management support.
So this partnership helps organizations navigate their AI and automation journey safely, securely, and effectively, balancing innovation with governance and speed with reliability. We're not just providing technology; we're delivering complete solutions with the necessary guardrails and support structure. That's what we're doing.

Well done, thirty seconds! Jason, how about you at OAS?

Well, if I could repeat exactly what Pavan just said, that's what we do. To be honest, we're delivering virtual agents within the public sector and AI for healthcare organizations. We're transforming the experience of residents and the efficiency of organizations: they're seeing lower costs, and we're turning lead times and waiting times of days into minutes. It's all really positive, what we're delivering for our customers. Our focus is delivering maximum benefit at minimum cost, and AI is helping us do that.

Thank you so much. Dan, how about SS&C?

I get the final word, do I? Wonderful. I like to say that Gen AI, or the large language model, is a feature, not a product. It's part of a wider solution. In effect, you have to surround that large language model with a lot of things to take it from a nice research project that looks cool in the lab to something that can really be used as an operational, real-world solution for a large organization. So on the one hand, we provide a lot of that tech stack: orchestration, automation, human in the loop, visibility, and so on. That's the tech-stack piece.
We also provide a level of governance, largely drawn from our own experience building a governance layer that SS&C leverages for its own Gen AI use cases, then productizing it and saying, here's something fit for purpose for the market. That's the second layer. And we recognize that a lot of this comes down to best practices. One of the things SS&C Blue Prism recognized right from the outset was the need for best practices, which we built into what we call our ROM, the robotic operating model. We've now transitioned that, again leveraging our experience in AI, into an enterprise AI operating model, helping people see what the best practices and the pitfalls are. And again, we leverage the real-world experience SS&C has: not just building a technology and writing an application like SS&C Blue Prism, but actually using it in the world of financial services, which at the end of the day is our core business. That's like gold dust: being able to say, we've actually done it, and you can learn from us. As I say, people are very excited to hear that. You mentioned organizational readiness before; I think everyone is willing but doesn't quite know how to do it, and wants a hand to hold on that journey.

Thank you very much for that. This has been a phenomenal conversation, with lots of different views across the board and some really great insights for organizations on how to unlock this potential in a way that makes sense for them. I want to thank all of you for your participation: Pavan, thank you very much.

Pleasure.

Jason, and, of course, Dan. It's been a great conversation. I'll leave you with a quote from Albert Einstein: learn from yesterday, live for today, hope for tomorrow.
And that's what we're going to do with this technology. Thank you very much, everyone.

Thank you. Thank you, Natalie. Thanks, Natalie. Bye.