Methodology

Research Design

The Enterprise GenAI Adoption Benchmark employs a mixed-methods research design, combining quantitative survey data with qualitative insights from follow-up interviews. This approach enables analysis of both the extent and nature of GenAI adoption across participating organisations.

Sample

Participation is voluntary and open to organisations meeting the following criteria:

  • Operate in an enterprise or large-organisation context
  • Can provide responses reflecting organisational AI strategy, delivery, or governance
  • Have begun exploring or implementing generative AI initiatives

We aim for coverage across sectors, organisation sizes, and operating models. Findings should be interpreted in the context of the current participant base, sector mix, and response coverage.

Data Collection

Survey Instrument

The full survey comprises 50+ structured questions across ten sections covering strategy, delivery, economics, vendors, data, governance, and workforce outlook. Questions include:

  • Multiple choice questions for quantitative analysis
  • Rating scales for assessing maturity and satisfaction
  • Open-ended questions for qualitative insights
  • Conditional branching based on prior responses

The survey typically takes 15–20 minutes to complete. Participants may save progress and return to complete the survey at their convenience.

Data Quality & Validation

Multiple measures ensure data quality and validity:

  • Pre-validation: Questions reviewed by subject matter experts and pilot tested with a small sample of organisations
  • In-built validation: Response validation for required fields and logical consistency checks
  • Post-collection validation: Automated checks for complete responses and outlier detection
  • Manual review: Sample of responses reviewed for quality and consistency
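One simple form of the automated outlier detection mentioned above can be sketched as a z-score check. This is an illustrative sketch only: the threshold, the `flag_outliers` helper, and the sample spend figures are hypothetical, not the benchmark's actual validation rules.

```python
import statistics as st

def flag_outliers(values: list[float], z_threshold: float = 3.0) -> list[bool]:
    """Flag values whose z-score exceeds a threshold.

    A simple automated outlier check; the benchmark's actual post-collection
    rules may use different statistics or thresholds.
    """
    mean, stdev = st.mean(values), st.stdev(values)
    if stdev == 0:
        return [False] * len(values)  # identical responses: nothing to flag
    return [abs(v - mean) / stdev > z_threshold for v in values]

# Hypothetical reported monthly GenAI spend figures (in thousands).
spend = [12.0, 15.0, 9.0, 14.0, 11.0, 400.0]
print(flag_outliers(spend, z_threshold=2.0))
# -> [False, False, False, False, False, True]
```

Flagged responses would then feed into the manual review step rather than being discarded automatically.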

Data Analysis

Quantitative Analysis

Quantitative data undergoes statistical analysis including:

  • Descriptive statistics (means, medians, distributions)
  • Correlation analysis between variables
  • Comparative analysis across segments (size, sector, region)
  • Trend analysis where longitudinal data is available

Qualitative Analysis

Open-ended responses are analysed using thematic analysis to identify common themes, patterns, and insights that complement the quantitative findings.

Benchmark Scoring Framework

The benchmark report groups answers into six core metrics presented on a 0–100 scale: Strategic AI Ownership, Vendor Dependence Risk, Automation-at-Scale Readiness, Board Risk Assurance, Data Execution Readiness, and Transformation Delivery Pace.

  • Each metric combines multiple survey signals using weighted scoring.
  • Missing-safe weighting is used so unanswered components do not invalidate a full response.
  • Percentile positions are shown relative to anonymised peer cohorts.
  • Question-level benchmarks compare responses by industry and against all participants, including value-realisation, benefit-versus-spend, attribution, switching-friction, and operating-model/accountability signals.
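The weighted, missing-safe scoring and percentile positioning described above can be sketched as follows. The signal names, weights, and cohort scores are illustrative assumptions, not the benchmark's actual scoring model.

```python
from typing import Optional

def weighted_score(signals: dict[str, Optional[float]],
                   weights: dict[str, float]) -> Optional[float]:
    """Combine 0-100 signal values into a metric score.

    Missing-safe weighting: weights are renormalised over the answered
    components, so unanswered components do not invalidate the metric.
    """
    answered = {k: v for k, v in signals.items() if v is not None}
    if not answered:
        return None  # no components answered: metric not scored
    total_w = sum(weights[k] for k in answered)
    return sum(weights[k] * v for k, v in answered.items()) / total_w

def percentile_position(score: float, cohort_scores: list[float]) -> float:
    """Percentage of peer-cohort scores strictly below the given score."""
    below = sum(1 for s in cohort_scores if s < score)
    return 100.0 * below / len(cohort_scores)

# Hypothetical component signals for one metric; one component unanswered.
signals = {"ownership_clarity": 80.0, "exec_sponsorship": 60.0, "roadmap_alignment": None}
weights = {"ownership_clarity": 0.5, "exec_sponsorship": 0.3, "roadmap_alignment": 0.2}

score = weighted_score(signals, weights)
print(score)  # -> 72.5 (weights renormalised over the two answered components)
print(percentile_position(score, [40.0, 55.0, 60.0, 72.5, 80.0]))  # -> 60.0
```

Renormalising over answered components keeps the metric on the same 0–100 scale regardless of how many components a participant completed.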

Participants can view metric definitions in the Metric explainer.

Anonymisation & Privacy

All data is anonymised prior to analysis:

  • Organisation identifiers are replaced with anonymous codes
  • Personally identifiable information is stored separately
  • Small-group reporting follows k-anonymity principles: results are reported only for groups of at least 10 respondents (N≥10)
  • Direct quotes are anonymised and attributed generically
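The k-anonymity reporting threshold above can be sketched as a suppression rule applied before publishing group-level results. The `report_group_means` helper and the group figures are hypothetical, shown here only to illustrate the principle.

```python
from typing import Optional

def report_group_means(groups: dict[str, list[float]],
                       k: int = 10) -> dict[str, Optional[float]]:
    """Report a group's mean only when the group has at least k respondents.

    Groups below the threshold are suppressed (returned as None) rather
    than reported, following k-anonymity-style small-group protection.
    """
    return {
        name: (sum(values) / len(values) if len(values) >= k else None)
        for name, values in groups.items()
    }

# Hypothetical sector groups with per-respondent scores.
groups = {
    "Financial services": [62.0] * 12,  # 12 respondents -> reported
    "Utilities": [70.0] * 4,            # 4 respondents  -> suppressed
}
print(report_group_means(groups))
# -> {'Financial services': 62.0, 'Utilities': None}
```

Suppressed groups would typically be folded into a broader category (e.g. "Other sectors") rather than omitted entirely.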

See our Data & Privacy page for detailed information about data handling and protection.

Limitations

Several limitations should be noted when interpreting findings:

  • Self-reported data: Responses reflect participants' perceptions and may be subject to bias
  • Non-response bias: Organisations that choose to participate may differ from those that do not
  • Rapid evolution: The GenAI landscape changes quickly; findings represent a snapshot in time
  • Cohort context: Findings should be interpreted relative to the participant mix, sector representation, and response coverage in the current dataset

Definitions

Generative AI

For the purposes of this benchmark, "generative AI" refers to artificial intelligence systems capable of creating new content—including text, images, code, audio, and other media—in response to prompts or requests. This includes large language models, image generation tools, code generation systems, and similar technologies.

In Production

"In production" refers to GenAI tools and systems that are actively used in operational business processes, delivering value to end users or customers. This excludes exploratory experiments, proof-of-concept projects, and tools used only by research or development teams without broader organisational deployment.

Enterprise

"Enterprise" refers to organisations with established business operations, including corporations, public sector bodies, non-profit organisations, and other entities with formal structures and operational capabilities.