Six Essential Elements of a Responsible AI Model

By Aaron Burciaga, Co-founder, Chairman & CEO
June 5, 2024
4 min read

New ethical and moral questions continue to emerge as we expand how we use artificial intelligence in business and government. This is undoubtedly a good thing. Developing new technologies without incorporating ethics, morals or values would be careless at best, catastrophic at worst.

This is also a gray area. For years, I’ve used "ethical AI" as a catchall phrase for the standards and practices that principled organizations should build into their data science programs. But what exactly is ethical? What is moral? According to whom?

Ethics are principles of right and wrong, usually recognized by certain institutions, that shape individuals’ behavior. Morals are shared social or cultural beliefs that dictate what is right or wrong for individuals or groups. You don’t need to be moral to be ethical, and vice versa, though the two terms are often used interchangeably.

This quandary calls for a shift in framework, one that focuses on “responsible AI” to better capture these nuanced and evolving ideas.

What Is Responsible AI?

Responsible AI comprises the autonomous processes and systems through which cognitive methods are explicitly designed, developed, deployed and managed according to standards and protocols for ethics, efficacy and trustworthiness. Responsible AI can’t be an afterthought or a pretense. It has to be built into every aspect of how you develop and deploy your AI, including in your standards for:

  • Data
  • Algorithms
  • Technology
  • Human Computer Interaction (HCI)
  • Operations
  • Ethics, morals and values

It’s easy to announce your values to the world; it’s much harder to exercise the daily operational discipline needed to live them. A responsible AI framework is rooted both in big ideas, like what is ethical or moral, and in everyday decisions, like how you treat your data and develop your algorithms.

Six Key Elements Of A Responsible AI Framework

I’m a firm believer in the maxim, often credited to Albert Einstein: “Everything should be made as simple as it can be, but not simpler.” This has been a guiding principle as I’ve been studying different AI models and developing a universal model for reference across industries and academia.

Within the proposed framework, responsible AI must be all of the following:

1. Accountable: Algorithms, attributes and correlations are open to inspection.

2. Impartial: Internal and external checks enable equitable application across all participants.

3. Resilient: Learning protocols that are monitored and reinforced with humans in the loop produce consistent and reliable outputs.

4. Transparent: Users have a direct line of sight to how data, output and decisions are used and rendered.

5. Secure: AI is protected from potential risks (including cyber risks) that may cause physical and digital harm.

6. Governed: Organization and policies clearly determine who is responsible for data, output and decisions.

Imagine the framework as a pie chart cut into six slices. You’re not aiming for perfection overnight, but you can scale from the center toward the edges, gradually filling in more of each slice. You can ultimately right-size the capability of each wedge according to your needs and resources. For example, your transparency might only be at 15% now, but after a year of concentrated effort, it could rise to 45%, with a goal state of 80%.
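To make the pie-chart idea concrete, here is a minimal sketch (in Python) of how a team might record current and goal-state percentages for each of the six slices and track the remaining gap. The class and field names are hypothetical illustrations, not part of any published framework.

```python
from dataclasses import dataclass, field

# The six elements from the framework above.
ELEMENTS = ["accountable", "impartial", "resilient", "transparent", "secure", "governed"]

@dataclass
class ElementMaturity:
    current: int  # current maturity as a percentage (0-100)
    target: int   # goal state as a percentage (0-100)

@dataclass
class ResponsibleAIScorecard:
    # Hypothetical default: every element starts at 0% with an 80% goal state.
    elements: dict = field(
        default_factory=lambda: {name: ElementMaturity(0, 80) for name in ELEMENTS}
    )

    def gaps(self) -> dict:
        """Return the remaining distance from each element's goal state."""
        return {name: m.target - m.current for name, m in self.elements.items()}

if __name__ == "__main__":
    scorecard = ResponsibleAIScorecard()
    # The article's example: transparency at 15% today, with an 80% goal state.
    scorecard.elements["transparent"] = ElementMaturity(current=15, target=80)
    for element, gap in scorecard.gaps().items():
        print(f"{element:12s} gap to goal: {gap}%")
```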

The Department of Defense’s framework has five components, the White House has nine and the intelligence community has 10. Various software, technology and other solutions providers have frameworks that range from three to 14 components. I recommend starting in the middle with this consolidated and focused list of six and subsequently fine-tuning it to the needs of your business. Always keep the model as simple as possible. If you want to expand it, examine your reasons first. Does the component you want to add not fit in any existing category? Are you tempted to grow your list due to some bias? For example, the intelligence community broke “goals and risks” and “legal and policy” into two separate items, whereas I think they could be combined in one governance category.

If the size, mission and application of AI warrant more oversight, I advise considering an additional step of establishing an AI ethics board. This isn’t necessary until you are ready to make a full investment and formalize a board to define what a bespoke responsible AI framework should look like for your organization. Otherwise, it’s best to keep your responsible AI program focused on the distilled and resilient six-part framework shared above. If you are considering creating an ethics board, ask what I call “salty questions” to take an honest look at your motivations and next steps:

  • Is an AI ethics board appropriate or necessary?
  • What should be our core ethical considerations?
  • What kind of strategy do we need?
  • How could we assess risk?
  • Are there particular areas where we will need board oversight?
  • How could we determine if the use of AI will result in discriminatory outcomes?
  • How could we assess bias?
  • Should we require our AI systems and algorithms (and those of our partners) to be open to inspection? How will we communicate resulting decisions?  
  • Who will be accountable for unintended outcomes of AI?
  • Who will be responsible for making things right?

Responsible AI is the path forward for counterbalancing risk, earning trust and overcoming bias as we take advantage of AI’s unlimited potential. The future of AI, spanning both humans and systems, must have strong and growing measures of accountability, impartiality, resiliency, transparency, security and governance.

Article originally published on Forbes.com.
