5 Questions Board Directors Can Ask to Shape AI Strategy

by
Bill Aimone
April 17, 2025

Organizations are all using AI in some way, whether employees are experimenting with it on their own or company-wide tools have been implemented. Whatever the scale of adoption, a comprehensive AI strategy is a must.

As organizations grow and AI alters business processes, the strategy informs how AI is managed, maintained, and championed. Leadership drives this strategy and sets the tone for the whole company from the start.

For Board Directors, here are five questions to help shape conversations with leadership around AI so these initiatives are successful and ultimately drive long-term value.

1. Has leadership set a tone at the top with AI guiding principles?

Leadership's attitude toward AI sets the tone for everyone. That leadership doesn’t just mean executives, although it usually starts there. It also includes managers, directors, and department heads who influence how teams view and adopt AI, which can determine the success of AI initiatives.

Establishing strong guiding principles around the use of AI is crucial. At their core, these principles shape how leadership approaches AI. They also guide decision making and create alignment among leaders around a unified vision. We have helped a number of companies (including our own) establish AI guiding principles. Here’s a quick list of recommended guiding principles that serve as a helpful jumping-off point.

2. Where do we see AI advancing our organization beyond simple efficiencies?

AI's role will undoubtedly grow. While we don’t know exactly how AI will function in the future, we know its impact will be significant, and we can make reasonable predictions. Leaders should think beyond current uses like data analysis or simple Q&A. Where can AI provide truly strategic advantages for the business?

Additionally, are there any existing opportunities to expand the use of AI to create more efficiency in other areas of the organization? Consider what’s possible now and in the future.

3. How are we ensuring employees are not exposing critical information?

Security is vital. Employees might be exposing sensitive company data without knowing it, particularly if they use public AI tools or AI tools linked to personal accounts. Tools like ChatGPT and Gemini are not inherently confidential. The organization needs strong safeguards to prevent critical information from being compromised and privacy from being put at risk.

4. What key controls have been established to manage data integrity?

AI systems need data to function, and the quality of AI's output depends directly on the quality of the data it uses. Is company data accurate, complete, and up to date? Incorrect, duplicated, or missing data will hinder AI's effectiveness. Organizations need processes and governance to keep data clean and organized so AI solutions function as expected.

5. Do we have policies in place to ensure the ethical use of AI?

Concerns about legal liability, data privacy, ethical considerations, and regulatory compliance make some companies hesitant to adopt AI. This is especially true in areas like HR and finance. Organizations need policies that are comprehensive, actionable, and effective to dictate how AI can be used responsibly without putting the company at risk. See our recommendations for making AI policies more robust here.

At Trenegy, we help organizations develop AI strategies and leverage fit-for-purpose AI solutions. To chat more about this, email us at info@trenegy.com.