How to Get More out of AI Without Compromising Safety, Ethics or Professional Judgement

Friday, Jul 18, 2025

As artificial intelligence (AI) transforms professional services, accounting and audit firms face unprecedented challenges in implementing AI safely and ethically. In May, we ran an expert panel at Accountex London 2025, bringing together leading voices from technology, practice and AI governance to explore critical considerations for firms navigating this complex landscape.

Our AI panellists examined how firms can harness AI’s transformative potential while managing risks around data privacy, bias and professional standards.

Panellist Rachel Tattersall (Head of Platforms, Cooper Parry) addressed operational challenges and best practices from a firm perspective, while fellow panellist Raj Patel, CQF, AI Transformation Lead at Holistic AI, provided expertise on AI governance and risk management frameworks.

Danielle Supkis Cheek (SVP, AI, Analytics and Assurance, Caseware) rounded out the panel with insights on practical implementation considerations from a vendor perspective.

The standing-room-only audience came away with actionable insights on deploying compliant, ethical and effective AI systems that enhance rather than compromise professional judgement.

Here are the key insights from their Q&A session.

How enthusiastic are clients about AI being used by advisors within the firm?

According to Rachel Tattersall of Cooper Parry, “Clients appear to be across a spectrum when it comes to embracing AI. We have some clients who do not want us to use AI in any part of our service offering, while others actively ask about AI tools and seek collaboration on implementation. We’re starting to have more open conversations with new clients about the use of AI. Clients are coming to us to ask what we are using, while also looking at how they can employ AI in their own ways of working.”

The appetite to discuss the technology and new ways of using it suggests growing acceptance and curiosity about AI applications in professional services, though adoption remains gradual.

Why is governance so critical for businesses deploying AI? How can businesses build ethical frameworks that balance AI efficiency with professional responsibility?

Raj Patel, AI Transformation Lead at Holistic AI, positioned governance as the foundation for successful AI deployment. He used a compelling analogy: “If AI is the engine of your business, governance is the brakes. Cars can only go quickly because they have brakes. You wouldn’t drive quickly if you didn’t have the ability to slow down and stop.”

Patel emphasised that those businesses not experimenting with AI risk falling behind competitively, while also stressing that rapid deployment must be balanced with risk management. “AI governance is the method and the mechanism that allows you to monitor risk, put controls and guardrails in place and deploy responsibly and effectively – all the while building confidence and trust in your AI deployment.”

He continued, “Effective AI governance requires cross-functional collaboration, ensuring data science teams communicate with compliance and risk teams while aligning with board-level company strategy. Most critically, governance builds trust – something that takes a long time to build, only seconds to break and forever to recover.”

How can firms protect client confidentiality when implementing AI, and what safeguards do you recommend firms implement to protect sensitive information?

According to Danielle Supkis Cheek of Caseware, data protection presents complex challenges for AI implementation in accounting. “With great power comes great responsibility: vendors need to successfully navigate giving their users enough power to achieve their AI goals but not actually allowing them to harm themselves.”

She described Caseware’s approach, which involves multiple layers of protection. Beyond securing and storing data safely, Caseware considers different use cases and data classifications. Client confidential data has strict usage rules, even for internal purposes, and firms typically maintain different permission levels for different clients. The risk profile of confidential data means internal use cases still require governance frameworks, and Danielle emphasised the importance of balancing platform flexibility with safeguards to prevent inappropriate use cases.
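To make that layered approach concrete, here is a minimal sketch of how an AI feature might be gated on data classification and per-client permissions. It is illustrative only, not Caseware’s actual implementation: the classification names, policy fields and may_use_ai function are all hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class DataClass(Enum):
        PUBLIC = 1
        INTERNAL = 2
        CLIENT_CONFIDENTIAL = 3

    @dataclass
    class ClientPolicy:
        client_id: str
        allow_ai_processing: bool          # has the client consented to AI use at all?
        max_data_class_for_ai: DataClass   # strictest classification AI may touch

    def may_use_ai(policy: ClientPolicy, data_class: DataClass) -> bool:
        """Gate an AI use case on client consent and data classification."""
        if not policy.allow_ai_processing:
            return False  # some clients opt out of AI entirely
        # Client-confidential data carries strict usage rules, even internally
        return data_class.value <= policy.max_data_class_for_ai.value

    # A client that permits AI, but only on non-confidential data
    policy = ClientPolicy("client-042", allow_ai_processing=True,
                          max_data_class_for_ai=DataClass.INTERNAL)
    print(may_use_ai(policy, DataClass.CLIENT_CONFIDENTIAL))  # False
    print(may_use_ai(policy, DataClass.PUBLIC))               # True

Defaulting to denial whenever consent or classification is missing mirrors the principle she described: give users enough power to achieve their AI goals without allowing them to harm themselves.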

What practical advice would you give to firms looking to scale their AI usage beyond initial pilots?

For firms moving beyond experimentation, Rachel Tattersall of Cooper Parry recommended focusing on people and curiosity. For example, Cooper Parry conducted focus groups over twelve months to understand AI exploration across service lines and support teams, using insights to develop their AI policy.

As she explained, the firm’s approach includes creating best practice guidance with basic examples and use cases, establishing a central portal for sharing ideas and celebrating successes publicly to encourage adoption and recognition.

How is the EU AI Act shaping best practices for businesses?

According to Raj Patel of Holistic AI, the EU AI Act has significantly influenced AI governance approaches. The legislation has transformed client conversations from “should I govern?” to “how do I govern?” by providing tangible frameworks with deadlines and requirements. However, Patel emphasised that effective governance should stem from ethical and business considerations rather than mere compliance.

The EU AI Act requires risk management systems and provides clear categorisation – prohibited (unacceptable-risk), high-risk, limited-risk and minimal-risk use cases. Notably, companies outside the EU are adopting these standards as their benchmark, suggesting the regulation’s influence extends beyond its geographical boundaries.
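As a rough illustration of how a firm might triage its own use cases against those tiers, consider the sketch below. The tier names follow the Act, but the example use cases, the mapping and the triage function are illustrative assumptions, not legal guidance.

    from enum import Enum

    class AIActRiskTier(Enum):
        PROHIBITED = "prohibited"   # unacceptable risk, e.g. social scoring
        HIGH = "high-risk"          # e.g. creditworthiness or hiring decisions
        LIMITED = "limited-risk"    # transparency duties, e.g. chatbots
        MINIMAL = "minimal-risk"    # e.g. spam filters

    # Hypothetical entries in a firm's internal AI use-case register
    USE_CASE_TIERS = {
        "ai-assisted hiring shortlist": AIActRiskTier.HIGH,
        "client-facing chatbot": AIActRiskTier.LIMITED,
        "drafting internal memos": AIActRiskTier.MINIMAL,
    }

    def triage(use_case: str) -> AIActRiskTier:
        # Unknown use cases default to HIGH so they get reviewed before deployment
        return USE_CASE_TIERS.get(use_case.lower(), AIActRiskTier.HIGH)

    print(triage("Client-facing chatbot").value)      # limited-risk
    print(triage("brand-new unreviewed tool").value)  # high-risk (fail safe)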

How can the AI trust gap be addressed?

Last year, the Harvard Business Review wrote about the AI trust gap, covering predictive machine learning and generative AI. Commonly cited concerns include hallucinations, disinformation, bias, safety and job loss. How can we address the AI trust gap?

The panel addressed several strategies for building trust in AI systems. Raj Patel emphasised a three-pronged approach: “people, processes and tooling.”

For people, organisations need learning programs that empower employees to use AI effectively and to recognise deployment opportunities, creating ownership rather than mere usage. Processes should include effective sandboxing and pathways from ideation to implementation. Tooling encompasses both governance solutions and internal mechanisms like AI committees that foster cross-business collaboration.

Rachel Tattersall highlighted accounting’s natural fit for AI adoption: “I actually think we’re in the perfect industry to explore the use of AI.” Her view was that the traditional review structure – where juniors perform work that managers and directors review while applying professional skepticism – creates natural safeguards against AI risks.

“AI will allow us to remove the lower-value, mundane tasks that none of us as accountants really enjoy doing,” she explained. This enables professionals to focus on technical aspects while juniors evolve into reviewers rather than just doers.

Danielle Supkis Cheek connected professional skepticism directly to AI reliability: “I believe professional skepticism means you should assume everything is a hallucination. That’s what professional skepticism means. You don’t believe anything that you see until you’ve done something to validate.”

She advocated prioritising both transparency and precision, while investing in systems that enable quick fact-checking rather than just marginally improving accuracy rates. “Fact checking will become the new drudgery,” she predicted, but emphasised this as an essential professional skill.

How can firms manage rapid change in AI implementation?

Danielle Supkis Cheek of Caseware emphasised the importance of having policies with safeguards around permissibility before scaling AI usage. Most firms have addressed confidential information concerns by choosing either open systems with restrictions on confidential data or closed systems that permit such data.
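In configuration terms, that binary choice might look like the hypothetical sketch below; the field names and the rule are illustrative assumptions, not any vendor’s actual settings.

    from dataclasses import dataclass

    @dataclass
    class AIUsagePolicy:
        closed_system: bool              # True: data stays within the firm's own environment
        allow_confidential_inputs: bool

        def validate(self) -> None:
            # Open systems must never receive confidential data; closed ones may
            if self.allow_confidential_inputs and not self.closed_system:
                raise ValueError("confidential data requires a closed system")

    AIUsagePolicy(closed_system=False, allow_confidential_inputs=False).validate()  # open + restricted
    AIUsagePolicy(closed_system=True, allow_confidential_inputs=True).validate()    # closed + permitted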

For change management, she recommended focusing on use cases that aren’t foreign to existing workflows, particularly helping early-career staff with tasks they typically struggle with. The value extends beyond time savings to quality improvement.

“I don’t think it’s only about time savings,” she explained, citing a Gartner study showing 70% of calculated AI time savings are lost to inefficient redeployment. “Instead, AI can help junior staff produce higher-quality first drafts of memos with better grammar and organisation, creating downstream benefits for reviewers.”

She identified long-form copywriting as low-hanging fruit, noting that accountants are typically better with numbers than writing, making this an ideal starting point that doesn’t introduce significant new risks.

Practical Recommendations for Small Practices

When addressing concerns from smaller accounting practices about AI governance costs, the panel offered practical guidance. Danielle Supkis Cheek noted that AI democratisation means smaller practices now have access to technology previously reserved for large organisations, with consumption-based pricing rather than large upfront investments.

For small firms, she recommended first understanding the risks around confidential information and conducting due diligence on both products and their data usage policies. She also cautioned against free versions of products, which often monitor data and usage to shape product design. “If you are using the free version of anything, you are probably shaping the product’s capabilities yourself, effectively trading your data privacy for free access and, as a result, your data is at risk,” she warned, advocating for paid versions of products and for reading terms of service agreements thoroughly.

Rachel Tattersall commented on AI policy development and the need to make it practical and digestible. “We don’t want to recreate ‘War and Peace’ in terms of AI policy. It needs to get to the point, be digestible and easy for people to understand so they can put it into practice.”

Raj Patel recommended starting with sandbox environments for safe testing and focusing on business areas where AI will deliver clear value. He emphasised that smaller companies can implement manual AI governance initially, building foundational knowledge that will facilitate future scaling.

The Path Forward for AI Requires Careful Attention

The panel’s insights reveal that successful AI implementation in accounting requires a balance between innovation and professional responsibility. While the technology offers significant opportunities for efficiency and quality improvement, success depends on robust governance frameworks, transparent safeguards for client data and maintaining the professional skepticism that defines the accounting profession.

The message is clear: firms that fail to experiment with AI risk falling behind, but those that implement it without proper governance and trust-building measures risk far greater consequences. The path forward requires careful attention to people, processes and technology, with trust as the foundation for sustainable AI adoption.


------------
By Simon Warren, Head of Global Solutions, Caseware
Originally published on Accounting Insight News: www.accountex.co.uk/insight/2025/07/18/how-to-get-more-out-of-ai-without-compromising-safety-ethics-or-professional-judgement/
