
Artificial intelligence

Artificial intelligence, or AI, is creating new possibilities in a variety of fields, including financial services. However, AI also introduces risk, including safety and soundness risks for lending institutions.

As the regulator of the Farm Credit System, the Farm Credit Administration wants to ensure that System institutions have the resources they need to manage these risks while taking advantage of the benefits that AI affords.

In 2023, the FCA board adopted innovation as one of our agency’s top priorities, and in January 2024, the board issued an innovation philosophy statement (PDF) to lay out our position on innovation, including AI, in the System and the agency. The statement included a five-part framework to guide our efforts in this area.

Listed below are resources from our regulations, our examination manual, and outside sources that provide relevant guidance to System institutions on managing the risks associated with AI. We will continue to monitor the use of AI in the System, so please check this page periodically for updates and additional guidance.

Statutory and regulatory requirements

System institutions should consider all statutory and regulatory requirements when deciding whether and how to use AI in their business operations. Depending on their use, AI-based technologies could require compliance with any number of legal requirements. Several areas may have implications for AI, such as electronic communications, credit decisions, and risk management.

Also, System institutions should remain vigilant against model bias to ensure that AI-based systems do not impair their ability to comply with nondiscrimination and data privacy laws. Using AI does not absolve any institution of the responsibility to comply with legal requirements.

In October 2023, FCA issued a final rule on cyber risk management, which has particular relevance to AI. This rule, which takes effect on Jan. 1, 2025, amends and expands portions of the following regulatory sections:

Examination manual

Before adopting technologies that rely on artificial intelligence, System institutions should review the following sections of the FCA examination manual:

External sources of guidance

System institutions may find these external sources helpful when considering AI-related risks:

For more information

If you work for a System institution and have questions about AI and our expectations for your approach to AI-based systems, please contact your examiner-in-charge.

Page updated: September 13, 2024