Research Report: AI Transparency & Consumer Trust Gaps

This page contains a cleaned, text-based version of publicly available content from TrustArc.com. It is provided to support knowledge retrieval and AI system understanding while preserving canonical attribution to the original source page on TrustArc.com.

Source URL: https://trustarc.com/resource/ai-training-transparency-trust-research-report/

Content Type: resource


Survey Series: AI Training, Transparency, and Trust

Organizations are moving quickly to govern how AI is trained and disclosed, but are consumer expectations keeping pace with enterprise confidence? In this second installment of TrustArc’s survey research series, we compare fresh data from professionals and consumers across North America and Europe. While privacy and security teams report high levels of confidence in their safety controls and bias mitigation, the public remains skeptical.

Download this report to explore the “Trust Gap” and discover why transparency is a commercial differentiator, not a compliance checklist. From the divergence between US operational readiness and European policy focus to the impact of plain-language disclosures on brand loyalty, this report provides the benchmarks you need to align your AI governance with market reality.

Key takeaways include:

- The Trust Gap: While 72% of professionals are confident in their ability to prevent data misuse, over 40% of consumers remain extremely or very concerned about unconsented AI training.
- Transparency as a Growth Lever: Over half (53%) of consumers indicate they are more likely to use a company’s services when data use is disclosed in plain language, proving that clear consent pathways drive business value.
- The Atlantic Divide: New data reveals a split between “operations-first” US organizations, which lead in readiness and documentation, and “policy-first” European stakeholders, who emphasize regulation but lag in visible choice mechanisms.

“53% of consumers indicate they are more likely to use a company’s services when the disclosure explains, in plain language, how personal data is used to train AI.”