If you’re still hesitant to adopt Generative AI at your company, you’re not alone.
Technology vendor Teradata sponsored a survey of 900 executives who reported that more governance is needed to ensure the quality and integrity of gen AI insights, with 66% expressing concerns about its potential for bias and disinformation.
However, I think most would agree it’s not a matter of “if” your company will invest in Gen AI, but “when.” And just because AI technology and legislation aren’t foolproof doesn’t mean you should sit back and wait.
Instead, you can turn to Salesforce’s recent AI Acceptable Use Policy for inspiration and guidance.
Drafting an acceptable use policy for a novel technology like Generative AI can be particularly tricky because there aren’t many examples to follow. But Salesforce is a recognizable brand and pioneer that has been transparent about its process for understanding AI and safeguarding the company’s data and values.
In particular, Salesforce recommends:
- Accuracy and verifiable, traceable answers.
- Avoiding bias or privacy breaches.
- Supporting data provenance and including a disclaimer on AI-generated content.
- Identifying the appropriate balance between human oversight and AI automation.
- Reducing carbon footprint by right-sizing models.
These are Salesforce’s guiding principles; yours might be different. But it doesn’t hurt to start drafting your policy now and incorporating the kind of language you’ll want to protect your products, employees, and customers.