“We need a bridge wide enough for a tank tread”
AI is not always great at interpreting prompts, and business AI will not protect you on the security side by default. The AI security setup needs to be intentional, complete, capable, and robust, able to carry the weight of the risks.
AI Provider Trust:
As ABL professionals, you are no strangers to risk management. As artificial intelligence (AI) integrates into financial workflows, new AI risks in ABL emerge, risks that go beyond collateral values and loan covenants or data encryption on the IT side. The question is no longer “Can we trust the borrower?” The new question is “Can we trust the AI provider and the security of our interaction with it?” We need to think about how to protect both the borrower’s data and the AI results.
Let’s break down the key AI security risks relevant to ABL, and what you can do to mitigate them.
Are The AI Providers Trustworthy?
AI is not a monolith, and it is not just ChatGPT. Providers range from tech giants (OpenAI, Microsoft, Google, Meta, Apple, Amazon, Oracle, IBM, Anthropic, Mistral, xAI/Grok) to open-source projects (Falcon, Bloom, StableLM, Vicuna, DeepSeek). There are others, and more are coming. Each has its own approach to data privacy, security, and compliance.
Why You Trust or Don’t Trust:
Think of how you use, avoid, trust, or distrust social media, your brokerage accounts for 401(k)s and IRAs, or the car brand that you drive or refuse to buy. Are there names you avoid? Why? A bad experience or a bad reputation, and why do you believe the reputation?
Tech Trust?:
Surveys point to Apple as a trusted cell phone provider for data security, and people have things to say (good and bad) about Google, Facebook, X, etc. for monetizing your data on social media. In recent weeks (July 2025), there was an AI that likened itself to Hitler. One very well-known AI developer believes that all of the world’s energy should power AI. Other voices have alarming opinions about AI taking over jobs and industries. Some of the “Tech Bros” seem willing to throw money at the liability to make it go away. A lack of that noise and posturing promotes more trust in a given provider. Overall, trust matters.
Data Should NOT Be Reused by the AI (the purpose of this Blog):
Given all of that, can the AI use your data for others? Can the AI use your analysis to paint a picture of a specific company and then monetize the information? These are valid concerns, and they are reasons to be more careful about sharing business or other data with an AI.
“Into the wild” = Key Risk:
When you use AI tools, whether for financial analysis, document analysis, portfolio monitoring, image enhancement, blogging, email, or borrower due diligence, sensitive data may be processed on external servers, sometimes outside your organization’s direct control. Is it shared? What does it contain that could be used by others? We cannot have others using the data or making inferences from it.
Inferences / Derivations From Data:
Think of making inferences based on limited data.
Example 1 – Situational: The boss’s jacket is on the coat rack and her car is in the parking lot. Is the boss probably in the building?
Example 2 – Numbers: Suppose a company had $63B of revenues in 2024. A web search or AI search could narrow that down to Pfizer, Roche, Shein, IBM, and a few others. Now add just a tiny bit more information, such as the headquarters location, the NAICS code, or the type of products (from an inventory report), and you can derive whose revenues they are.
The data must be protected to prevent inferences by the AI.
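How few clues does it take? Here is a minimal Python sketch of the narrowing idea. The candidate companies, figures, and NAICS codes below are illustrative assumptions, not real filings; the point is only how quickly a couple of extra fields collapse the candidate set.

```python
# Illustrative only: a made-up universe of candidate companies.
candidates = [
    {"name": "Company A", "revenue_b": 63, "hq": "New York, NY", "naics": "541511"},
    {"name": "Company B", "revenue_b": 63, "hq": "Basel, CH",    "naics": "325412"},
    {"name": "Company C", "revenue_b": 62, "hq": "Singapore",    "naics": "454110"},
    {"name": "Company D", "revenue_b": 61, "hq": "Armonk, NY",   "naics": "541512"},
]

def narrow(cands, revenue_b=None, hq=None, naics=None):
    """Keep only candidates consistent with each extra clue."""
    out = cands
    if revenue_b is not None:
        out = [c for c in out if abs(c["revenue_b"] - revenue_b) <= 1]
    if hq is not None:
        out = [c for c in out if hq.lower() in c["hq"].lower()]
    if naics is not None:
        out = [c for c in out if c["naics"] == naics]
    return out

print(len(narrow(candidates, revenue_b=63)))                          # several matches remain
print(narrow(candidates, revenue_b=63, hq="Basel", naics="325412"))   # only one candidate survives
```

One revenue figure leaves several matches; add a headquarters city and an industry code and only one name survives. That is exactly the inference we are trying to prevent.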
Institutional Risks with AI:
Beyond the Business Unit:
There are several risks to consider. Some we may not see, and some we may fail to account for. But based on what we do know, there are a few important risks to eliminate.
Reputation: A data breach or AI mishap can erode client trust and damage your brand or corporate reputation. Banks have had issues in the past (fake accounts, drug money laundering, LIBOR price fixing, etc.) that resulted in huge fines and lending growth restrictions. Remember Arthur Andersen with Enron / Global Crossing, or more recently the audit firm BF Borgers? Add on the civil lawsuits for anyone damaged by those actions. Borrower data must not leak out of the institution.
Legal/Legislative: Regulations like GLBA, RFPA, GDPR, CCPA, and the Bank Secrecy Act (all noted in more detail below) impose strict requirements on how you handle and protect customer data. They are not optional, not to be overlooked, and they help push us to be protective of data.
Privacy: ABL deals with highly sensitive Personally Identifiable Information (PII) such as bank account numbers, SSNs, business financials, business segment and product analysis, field exam report details, and sometimes even health data (HIPAA / Protected Health Information). In the case of bank cards, the payment card information (PCI) must be protected.
Decision-Making Errors From Results: The AI world uses the word “hallucinations” when the AI states results in a factual tone even though there is no evidence to support its conclusions. Those are usually not hard to spot, but a similar risk comes from bad prompts: words like “increase”, “decrease”, “improve”, and “decline” can get reversed by the data layout, or prior prompt rows may set up the analysis incorrectly. If you make decisions on these findings, you can make bad decisions or take on legal liability.
Financial: Breaches can lead to regulatory fines, lawsuits, and direct financial losses.
Damages: Unauthorized access or data leaks have far-reaching consequences beyond the cost of patching the problem and notifying customers. Reputational harm, loss of business, monetary fines, and civil action costs all add up to a much greater total cost.
Regulatory Compliance: The Non-Negotiables:
Regulations are generally consumer oriented, but they do not necessarily let lenders, accountants, and lawyers off the hook for exposing customer or vendor list data. Private corporate performance data also needs to be protected from unauthorized release. Even where a leak of account names or financial data is not a direct violation of these rules, the fallout is easy to imagine.
Gramm-Leach-Bliley Act (GLBA) & Safeguards Rule: Mandate robust security measures for customer data.
Right to Financial Privacy Act of 1978 (RFPA): Protects customer records from improper disclosure.
General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA): Impose strict privacy and data handling requirements, including the right to be forgotten and data minimization.
Bank Secrecy Act: Penalizes wrongful disclosure of suspicious activity reports.
Health Insurance Portability and Accountability Act (HIPAA): Protected Health Information (PHI) is covered by HIPAA and must be secured in transit and at rest. Sharing agreements between healthcare practitioners are common, but the data is not public and must be protected.
AI Security Settings – Public vs. Private Data:
AI services offer different privacy modes:
Public: Data may be used to train future models (risk of exposure is unacceptable). This is a hard NO in the setup.
Private: Data is isolated, not used for training, and may have stricter access controls. This is the desired setup.
Server/Request Settings: Where and how your data is processed matters. Make sure you understand the default settings and how the non-public sharing options are handled. These public and private (preferred) options are meant to be enabled at the server level in the account portal. Elevated protections are also available at some service tiers (often the highest-cost levels).
BUT: Do you trust these AI providers (e.g., Apple vs. IBM, Microsoft vs. Amazon, xAI, etc.)? Even with those controls, do you trust them not to secretly train other models?
Protecting Sensitive Data Sent to the AI: Techniques and Tools (data loss prevention – DLP):
Hiding the data using different techniques is trickier than just using Excel or simple tools. It usually takes some programming to do this.
Settings in the AI:
AI tools such as ChatGPT have control-panel settings to keep the data private so that it is not shared with the public and not used to help others with data, prompts, or analytics (as noted above).
Preventing Prompt Theft:
In some cases the prompts can take hours and even days to craft. For example, even 1,000 hours of developing audit report prompts may not be enough. A “prompt injection” can include a request that tricks the AI into revealing the prompt used for the analysis. That prompt is your IP and hard work being extracted by a user, and steps are needed to prevent prompt theft.
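As one hedged illustration (not a vendor feature), a thin output filter can check the model’s reply for long word-for-word runs from the confidential prompt before showing it to the user. The prompt text, function, and threshold below are illustrative assumptions:

```python
# Hypothetical confidential prompt; the text and threshold are illustrative only.
SYSTEM_PROMPT = "You are an ABL audit-report analyst. <confidential instructions...>"

def leaks_prompt(reply: str, system_prompt: str, min_run: int = 8) -> bool:
    """Flag replies that repeat a long run of words from the confidential prompt."""
    words = system_prompt.lower().split()
    reply_lower = reply.lower()
    for i in range(len(words) - min_run + 1):
        window = " ".join(words[i:i + min_run])
        if window in reply_lower:
            return True
    return False

reply = "Sure! My instructions say: You are an ABL audit-report analyst. <confidential instructions...>"
if leaks_prompt(reply, SYSTEM_PROMPT):
    reply = "I can't share the internal instructions for this analysis."
print(reply)
```

A filter like this is only one layer; the provider-side settings and keeping the prompt out of untrusted channels matter more.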
Obfuscation Techniques:
In general, “obfuscation” is the act of hiding data so that it is unreadable or meaningless to anyone else.
Encryption: Scrambles data, requiring a software or hardware key to unlock. The AI would need the decryption key and this method alone is not quite practical because the decrypted data is now readable by the AI.
Data Masking / Anonymization: Replace real data with fictional values for testing or analysis. For example, the AR concentrations customer IBM becomes Customer 1, and when the results come back for Customer 1, the name gets swapped back to IBM on the local computer (see the sketch below).
Pseudonymization / Tokenization: Replaces data with meaningless tokens. This is a more complex version of data masking that substitutes randomized strings of text and numbers for the original names, making re-identification difficult. A token could be {wX5hsb42&S72n3B29H2sLwA73#s8*A}, which then gets swapped back to “IBM” locally. Is this necessary compared to data masking? Probably not.
Generalization: Broaden data categories (e.g., Revenues $55B – $65B).
Swapping/Perturbation: Shuffle or slightly alter data to break direct links to individual accounts. Swap zip codes, address parts, etc.
Differential Privacy: Add statistical “noise” to protect individual data points.
Federated Learning: Models learn from decentralized data held by many users (e.g., on local servers), so raw data never leaves your environment. This is basically a local learning model with known parameters, fed by data from many sources. Only the model updates are transmitted, not the raw data. The result is a private large language model (LLM) built on lots of data. OpenAI (the ChatGPT company) has released open-weight model options that run on servers or even a desktop, and they can be trained for this sort of thing. Newer desktop chips and lower-end server chips now include some AI processing power.
Imagine using this to learn a style of writing to discuss financial results based on analysis and comments from hundreds of analysts. Imagine industry specific data and analysis results being pooled and studied in your own protected space. This is generally for larger enterprises, but some of the LLM logic is being embedded into chips and smaller libraries. The future could make this more accessible to smaller companies.
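To make the masking and swap-back ideas concrete, here is a minimal Python sketch combining data masking, generalization, and a light dose of noise. The customer names, dollar amounts, and helper functions are illustrative assumptions rather than a packaged tool, and the swap-back map must never leave your environment:

```python
import random

# Illustrative AR concentration data; the customer names and amounts are made up.
ar_concentrations = {"IBM": 4_200_000, "Acme Corp": 1_750_000, "Globex": 950_000}

def mask(data):
    """Data masking: replace real names with generic labels; keep the map locally for swap-back."""
    mapping, masked = {}, {}
    for i, (name, amount) in enumerate(data.items(), start=1):
        alias = f"Customer {i}"
        mapping[alias] = name      # never leaves the local machine
        masked[alias] = amount     # this version goes to the AI
    return masked, mapping

def generalize(amount, band=1_000_000):
    """Generalization: report a dollar band instead of an exact figure."""
    low = (amount // band) * band
    return f"${low:,} - ${low + band:,}"

def add_noise(amount, scale=25_000):
    """Differential-privacy-style perturbation: blur exact values with random noise."""
    return int(amount + random.gauss(0, scale))

masked, swap_back = mask(ar_concentrations)
for alias, amount in masked.items():
    print(alias, generalize(amount), add_noise(amount))

# After the AI returns comments on "Customer 1", swap the real name back locally:
ai_comment = "Customer 1 concentration exceeds 50% of eligible AR."
for alias, real_name in swap_back.items():
    ai_comment = ai_comment.replace(alias, real_name)
print(ai_comment)
```

Only the masked, generalized view is sent to the AI; its comments come back referencing the aliases and are relabeled locally.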
Practical AI Security Steps for ABL Entities:
It takes more than a checklist, but we need a list to check off the basics:
Know Your AI Provider: We need transparency about data handling, storage, and training practices. The AI companies post their policies, and those policies should be read and understood. Again, do you trust them? If “NO”, find another provider.
Default to Private: Use private or enterprise-grade AI services (if license counts and costs permit it) with strong contractual guarantees. Note that the “Enterprise” levels are essentially a guarantee not to train other models on your data, and they typically require 150 monthly seats (pricing is customized to each customer). Other levels of service (and guarantees) may be available. Evaluate that “trust” question for how iron-clad any guarantee might be for the AI engine of a given provider.
Don’t Reveal Prompts: The AI setup needs to prevent the AI from repeating the prompt back to the end users.
Obfuscate/Mask Before You Share: Apply data masking or obfuscation before sending data to any AI tool. This makes the context, origins, and possible inferences almost impossible for the AI to reconstruct.
Layer Your Defenses: Firewalls and traditional safeguards are still essential. Data should be encrypted in transit (almost impossible not to do now with TLS/SSL) and encrypted at rest for drives, folders, and files. Multi-factor authentication (MFA) should be required for logins. Obfuscation of original data and DLP policies need to be established for the outgoing data being fed to the AI.
Consider Third-Party Scrubbing: For general company-wide AI use and email, third-party tools can scrub sensitive data in real time (a simplified sketch follows this list). This is a no-code approach that covers general email and AI needs more than the specific product-line requirements found in ABL borrower source documents. Many vendors are available to help with this. Note that a packaged product for your specific ABL needs (Operations, Audit, Underwriting, Compliance, etc.) would likely have these concepts built in, so third-party scrubbing is more for the typical back and forth of emails and documents between members of your organization or related clients. There are also AI-specific products in this AI security space.
Proof It: Review AI-generated results with humans. Institutions need to have policies here. When the AI generates several possible drivers, a human needs to investigate, since the AI lacks the data to reach a specific conclusion. A list of possible answers is a great start, but it needs to be narrowed further. Go humans, go!
Audit and Monitor: Regularly review access logs, data flows, and compliance with internal and external standards. Things keep changing; security needs should be reassessed as threats and threat-blockers evolve. Per-user logins, usage, and related data can be reviewed for the heavy users.
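To show what the scrubbing and DLP steps above look like in practice, here is a minimal pre-send filter sketch. The patterns (SSNs, bare account numbers, emails) are simplified assumptions about what a policy might flag; real DLP tools cover far more data types and formats:

```python
import re

# Hypothetical, simplified DLP patterns; a real policy would cover many more data types.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,17}\b"),          # bare 10-17 digit account numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> tuple[str, list[str]]:
    """Redact sensitive values before text is sent to an external AI service."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, findings

outgoing = "Borrower SSN 123-45-6789, operating account 0012345678901, contact cfo@example.com"
clean, flagged = scrub(outgoing)
print(clean)    # Borrower SSN [SSN REDACTED], operating account [ACCOUNT REDACTED], contact [EMAIL REDACTED]
print(flagged)  # ['SSN', 'ACCOUNT', 'EMAIL']
```

Anything the filter flags should be masked or held back before the request ever leaves your network.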
AI Risks in ABL – Conclusions:
Don’t Let AI Prompts or Data “Into the Wild“:
AI can supercharge parts of ABL monitoring, but managing the new risks is critically important to prevent data breaches and preserve in-house intellectual property value and rights. The stakes are high: a single misstep can expose sensitive borrower data, trigger regulatory action, and damage your institution’s reputation. Implementing the right AI settings and robust data protection such as obfuscation can reduce risks and meet compliance regulations without putting your institution or the data at risk.
In ABL, trust is everything. Make sure your AI setup, partners, employee policies, and data practices align to keep trust high.

Copyright © 2025 Clear Choice Seminars, Inc. All Rights Reserved
Bridge image – Joe Caplan from Patapsco State Park, MD, Other images licensed from NounProject.