DUO’s AI SOFTWARE USE POLICY

Generative AI has become a transformational technology and is increasingly useful. It should be actively used for research purposes as part of the PR and digital marketing toolkit. AI tools must be used responsibly to enhance our ability to serve our clients. However, the responsible use of AI must always be subject to human judgement and oversight to avoid bias, misuse, inadvertent risks of harm, and breaches of our contracted non-disclosure agreements. The following guidance on the use of generative AI in our work for clients is based on the PR Council Code of Ethics and Principles.

Protection of Client Information

We protect the integrity of client information:

  • Confidentiality: Exercise caution before entering confidential client information into a generative AI tool or platform. For example, do not use AI to create the first draft of a press release about a new product or to draft internal memos for staff. Other examples of confidential information include, but are not limited to:
    • Client business plans
    • Client or prospective client documentation
    • Paid or confidential analyst reports (e.g. for summarisation)
    • Paid market insights (e.g. to extract key findings)
    • Confidential research data
    • Text related to sensitive internal employee communications
  • If you are unsure, please ask a BUD or the CEO for input.

Creative Work and Copyright

  • AI-Generated Images: Do not use generative AI images as final creative assets for client campaigns. The AI-generated work may be at risk of copyright infringement, and the work itself cannot be entirely protected under copyright laws since humans did not create it. Additionally, certain generative AI tools may limit the user’s ownership rights in the work the platform generates.

Commitment to Accuracy

We are committed to accuracy:

  • Verification: Always check and source the data provided by generative AI tools, and validate claims by locating and reviewing the original source yourself. Generative AI chat tools can convincingly fabricate information.
  • Plagiarism and Infringement: Always check for inadvertent plagiarism, copyright infringement, or trademark infringement in AI-generated output.
  • Supplier Transparency: Ask suppliers how they use AI in their tools (such as sentiment analysis) and work to eliminate biases and improve accuracy. Ensure transparency around their prompts and inputs to mitigate the risk of inadvertent infringement of third-party rights.

Transparency in Third-Party Relationships

We believe that our clients and the public are best served when third-party relationships with spokespeople, partners, and suppliers are open and transparent:

  • Disclosure: As an employee, disclose to your line manager if you use generative AI tools as part of the drafting or creation process.
  • Voice and Music AI: Never use voice- or music-generating AI tools to mimic the voice or style of a real person. Corrections made using AI require mutual consent and a signed agreement before proceeding. Additional requirements may apply if the voiceover talent is a union member, or if the agency or client is a signatory to a union contract.
  • Respect for Creators: Do not prompt generative AI to develop creative content that mimics the style of a specific artist.

Valuing Diversity and Inclusion

We value diversity and inclusion in our profession:

  • Bias Awareness: Beware of biases incorporated in AI-generated output, both in writing and in developing imagery for a campaign.
  • Translation Accuracy: Do not rely on generative AI tools to translate documents into other languages; the resulting translation or transcreation may be inaccurate.
  • Diverse Experiences: Do not use generative AI as a replacement for diverse experiences, insight, or engagement. Utilise diverse perspectives within the agency to review content created by generative AI tools, so that bias is not overlooked or shared externally.
  • Authentic Representation: Do not use generative AI tools to create imagery, likenesses, or avatars that create the appearance of diversity instead of working with diverse talent.