It seems to be the most talked-about feature, characteristic, and value in seemingly every conversation today: politics, leadership, news, religion, law, technology, both online and in our neighbourhoods. For technologists, trust is top of mind as we consider where and how to deploy artificial intelligence, and generative AI in particular. There are issues everywhere and a tremendous amount at stake if we get it wrong.
Before we can begin to consider how to make trustworthy AI, we should first consider what being trustworthy even means. Trying to define a concept without using the very same terminology is a challenging exercise. In this case, however, I think it highlights the very characteristics you want in a trustworthy relationship. Here are some of the words I came up with:
- Confidence
- Thoroughness
- Transparency
- Traceability
- Reliance
- Viability
- Honesty
- Realistic
- Known
- Consistent
- Expected
- Factually accurate
- Reliability
- Credibility
- Insurance
- Insurability
- Hope or Belief
If we agree that trust is really all these things then who or what do you trust? Your spouse? Your neighbors? Your children? Their teachers? Your lawyer or accountant? Your boss? What would any of these have to do to destroy that trust and what would they have to do to rebuild that trust with you?
For businesses that rely on trust – trust in their products, brand, relationships, consulting – a critical lapse in honesty, transparency, reliability, or credibility can kill. History is full of businesses that were not able to overcome the trust gap: Andersen Consulting (which rebuilt its brand as Accenture), Lehman Brothers, Washington Mutual, and security conglomerate Tyco in the early 2000s.
I think we all know that trust takes years to build, but any breach of it (think confidence, reliability, honesty, credibility) can undermine it immediately and irrevocably. To overcome this and ensure our brand integrity and customer loyalty, AI and Generative AI Risk Assessments and Management become ever more critical.
What about your AI implementation? There are so many examples of artificial intelligence failures, many of which we'll explore in forthcoming blogs, that the bridge of trust may be a bridge too far. What methods or technologies can you put in place to ensure a reliable, credible, responsible, and trustworthy outcome? This is where technologies like an AI Trust Score™ become mission critical in Generative AI implementations.
Tumeryk offers an AI Trust Score™, modeled after the FICO® Credit Score, for enterprise AI application developers. The tool helps identify underperforming AI models and establishes automated guardrails to assess, monitor, and mitigate risk from these models before they affect consumers or the public.
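To make the guardrail idea concrete, here is a minimal, hypothetical sketch of how a trust score might gate a generative AI response before it reaches a user. The `get_trust_score` helper, the 0–1000 scale, and the threshold are illustrative assumptions for this post, not Tumeryk's actual API.

```python
# Hypothetical guardrail sketch: gate a generative AI response on a trust score.
# The scoring scale, threshold, and get_trust_score helper are illustrative
# assumptions, not a real product API.

TRUST_THRESHOLD = 700  # assumed minimum acceptable score on a FICO-style scale


def get_trust_score(model_id: str, prompt: str, response: str) -> int:
    """Placeholder for a real trust-scoring service.

    A production implementation would evaluate signals such as factual
    accuracy, consistency, and policy compliance; here we return a stub value.
    """
    return 650  # stub value for demonstration


def guarded_response(model_id: str, prompt: str, response: str) -> str:
    """Return the model's response only if its trust score clears the threshold."""
    score = get_trust_score(model_id, prompt, response)
    if score < TRUST_THRESHOLD:
        # Low-trust output is withheld and flagged for review instead of shown.
        return "This response was withheld pending review (trust score too low)."
    return response


if __name__ == "__main__":
    print(guarded_response(
        "demo-model",
        "What is our refund policy?",
        "You always get a full refund, no questions asked.",
    ))
```

The point of the sketch is simply that trust becomes an enforceable, measurable gate in the application flow rather than an after-the-fact judgment.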