Artificial Intelligence and Fiduciary Obligations (Part 1) — Key Risk Considerations
The rapid adoption of artificial intelligence tools across the financial services and fiduciary sectors presents significant opportunities, but it also introduces legal and regulatory risks that fiduciaries — whether individual trustees, corporate fiduciaries, banks, trust companies, or investment advisers — must carefully evaluate. As the SEC's Investor Advisory Committee has emphasized, compliance with an ethical framework for the use of AI is consistent with the fiduciary duties of advisers, including their affirmative duties of care, loyalty, honesty, and utmost good faith. [1] Below, we highlight four critical areas of risk at the intersection of AI and fiduciary responsibility.
Preservation of Confidentiality
Fiduciaries are legally obligated to safeguard the confidentiality of client information. The use of AI tools — particularly generative AI and cloud-based platforms — may expose nonpublic personal information to third-party systems, risking unauthorized disclosure. The Gramm-Leach-Bliley Act and its implementing regulations restrict financial institutions from disclosing consumers' nonpublic personal information and require the implementation of administrative, technical, and physical safeguards to ensure the security of customer information. [2] As NIST has recognized, AI systems can present new risks to privacy by allowing inference to identify individuals or previously private information about individuals. [3] State regulators are actively addressing these concerns. For example:
The New York Department of Financial Services has issued guidance clarifying that its cybersecurity regulation (23 NYCRR Part 500) requires covered entities to assess AI-related risks to nonpublic information and implement appropriate safeguards.
California’s Attorney General has issued a Legal Advisory emphasizing that the California Consumer Privacy Act (CCPA) applies to AI systems capable of outputting personal information, and that businesses must comply with the Act’s protections for personal and sensitive data when deploying AI tools.
Fiduciaries should evaluate whether inputting client data into AI systems could compromise these obligations.
Exercise of Discretion and Non-Delegation
A core tenet of fiduciary law is that discretionary authority must be exercised by the fiduciary and cannot be improperly delegated. Relying on AI-generated outputs — such as investment recommendations, trust administration decisions, or compliance assessments — without independent human review may constitute an improper delegation of fiduciary judgment. The SEC has noted that advisers should be mindful of the unique aspects of algorithm-based investment models, including the need for enhanced monitoring and risk-based reviews. [4] Federal financial regulators have consistently stated that AI outputs should inform staff decisions but should not serve as the sole basis for those decisions. [5] State fiduciary statutes reinforce these obligations:
New York’s Prudent Investor Act (EPTL § 11-2.3) requires trustees to “invest and manage property held in a fiduciary capacity in accordance with the prudent investor standard,” which mandates that professional fiduciaries “exercise such diligence in investing and managing assets as would customarily be exercised by prudent investors of discretion and intelligence having special investment skills.” The statute permits delegation of investment functions only if the trustee “exercises reasonable care, skill, and caution” in selecting agents, establishing scope, and periodically reviewing performance — delegating decisions to an AI system without such oversight could violate these requirements.
Similarly, California’s Uniform Prudent Investor Act (Probate Code §§ 16045-16054) requires trustees to “invest and manage trust assets as a prudent investor would, by considering the purposes, terms, distribution requirements, and other circumstances of the trust” and to “exercise reasonable care, skill, and caution.” California law permits delegation only where the trustee exercises prudence in selecting agents and periodically reviews the agent’s compliance — mere reliance on AI outputs without such diligence may expose fiduciaries to liability.
Security Measures
Fiduciaries must implement appropriate cybersecurity safeguards when deploying AI tools. AI can increase cyber threats by introducing vulnerabilities that allow attackers to evade detection or manipulate AI decisions. [6] NIST's AI Risk Management Framework identifies security and resilience as key characteristics of trustworthy AI systems, and recommends that organizations treat AI risks together with other critical risks, such as cybersecurity and privacy, to yield a more integrated risk management outcome. [7][8] State regulators have issued specific cybersecurity guidance addressing AI-related risks:
The New York Department of Financial Services issued an Industry Letter in October 2024 on “Cybersecurity Risks Arising from Artificial Intelligence,” highlighting threats including AI-enabled social engineering, AI-enhanced cyberattacks, theft of nonpublic information from AI systems, and supply chain vulnerabilities from AI vendors. The guidance emphasizes that multi-factor authentication, access controls, and risk assessments must account for AI-related threats.
California’s Attorney General has advised that businesses using AI must implement robust security measures to protect data from unauthorized access or breaches, and that violations of cybersecurity obligations may be actionable under the Unfair Competition Law.
Vendor Due Diligence
Many fiduciaries rely on third-party AI vendors rather than developing tools in-house. In June 2023, federal banking regulators issued final interagency guidance on third-party risk management, emphasizing that a banking organization's use of third parties does not diminish its responsibility to perform all activities in a safe and sound manner, in compliance with applicable laws and regulations. [9][10] This guidance covers the full lifecycle of third-party relationships — from planning and due diligence through ongoing monitoring and termination. [11] Fiduciaries should conduct rigorous due diligence on AI vendors, including assessments of data handling practices, security protocols, and the vendor's own compliance posture.
Additional Considerations for Professional Advisors
Attorneys serving as fiduciaries or advising fiduciary clients face additional ethical obligations when using AI tools. Bar associations across the country have issued guidance emphasizing that AI use must align with professional responsibility rules:
The New York State Bar Association’s Task Force on Artificial Intelligence has warned that AI must not compromise attorney-client privilege and that attorneys have an obligation to ensure paralegals and other employees handle AI properly. The Task Force determined that New York’s Rules of Professional Conduct provide guidance governing AI use but emphasized the need for ongoing attorney education to ensure proper handling of the technology. [12]
The State Bar of Georgia has released a Generative AI Toolkit emphasizing that attorneys must understand AI limitations, maintain human oversight of all AI-generated content, obtain client consent before inputting confidential information into AI systems, and verify all AI outputs for accuracy before relying on them. Georgia’s guidance makes clear that attorneys remain responsible for all content whether AI is involved or not, and that AI algorithms can reflect or perpetuate biases that attorneys must assess. [13]
While the Oklahoma Bar Association has not issued formal ethics guidance, Oklahoma’s highest criminal court has implemented rules requiring attorneys to review and verify all AI-generated legal documents, with potential sanctions — including contempt — for non-compliance. The court emphasized that attorneys remain responsible for any inaccuracies in AI-assisted filings and that using AI without proofreading or further research demonstrates a lack of diligence. [14]

Fiduciary counsel should establish clear internal policies on AI use, provide training on AI risks and verification procedures, and ensure compliance with both fiduciary standards and applicable rules of professional conduct.
Additionally, several states have codified human oversight requirements specifically for AI systems:
New York’s Acceptable Use of AI Technologies Policy requires that AI systems be explainable and interpretable, and that AI-assisted decisions impacting individuals — including in financial contexts — remain subject to human review.
California’s Generative Artificial Intelligence Accountability Act (SB 896) mandates that state agencies using generative AI to communicate with a person regarding government services must provide a disclaimer and information on how to contact a human employee.
Fiduciaries who adopt AI tools without addressing these interconnected risks face potential regulatory enforcement, civil liability, and reputational harm. We encourage all clients acting in a fiduciary capacity to consult with legal counsel before integrating AI into their operations.
Coming in Part 2: AI Risks in the Management and Administration of Digital Assets
In our next alert, we will examine the unique risks that AI presents in the fiduciary management and administration of digital assets, including cryptocurrencies, tokenized securities, and other blockchain-based holdings. Among other topics, Part 2 will address:
The risk that AI-driven portfolio management tools may execute unauthorized or erroneous transactions involving digital assets due to algorithmic errors or flawed training data, potentially breaching fiduciary duties of care and prudence.
The heightened cybersecurity vulnerabilities that arise when AI systems interact with digital wallets and private key infrastructure, where a single AI-related exploit could result in irreversible loss of assets.
The regulatory uncertainty surrounding AI-assisted compliance with evolving federal and state frameworks for digital asset custody, where over-reliance on automated tools may leave fiduciaries exposed to liability for failing to satisfy emerging custodial and reporting obligations.
We encourage clients with digital asset fiduciary responsibilities to watch for that forthcoming publication.
For More Information
If you have questions or would like additional information, please contact Cherish De La Cruz (cherish.delacruz@pierferd.com), Tom Vincent (tom.vincent@pierferd.com), or your regular firm contact.
This publication and/or any linked publications herein do not constitute legal, accounting, or other professional advice or opinions on specific facts or matters and, accordingly, the author(s) and PierFerd assume no liability whatsoever in connection with its use. Pursuant to applicable rules of professional conduct, this publication may constitute Attorney Advertising. © 2026 Pierson Ferdinand LLP.
Sources
GAO-25-107197, Artificial Intelligence: Use and Oversight in ...
NIST AI Resource Center, AI Risks and Trustworthiness
NIST AI 100-1, Artificial Intelligence Risk Management Framework ...
Interagency Guidance on Third-Party Relationships: Risk Management ...
OCC, Agencies Issue Final Guidance on Third-Party Risk Management
New York State Bar Association Task Force on Artificial Intelligence, Report and Recommendations (Apr. 2024)
State Bar of Georgia, Generative AI Toolkit for Lawyers (2024)
Oklahoma Court of Criminal Appeals, Rule Regarding Use of Artificial Intelligence in Filings (2024)