

Reply To: How can I use AI to cluster and analyze Voice-of-Customer (VOC) feedback at scale?

#125103
Jeff Bullas
Keymaster

Quick win (5 minutes): Grab 10 recent customer comments, paste them into an LLM with the prompt below and ask for a theme name + sentiment. You’ll instantly see whether common threads pop up — no engineering required.

Why this matters

A large, noisy stream of VOC hides the few themes that actually move your metrics. A small embedding-plus-clustering pilot, paired with a quick human check, gives you prioritized, actionable themes in days instead of months.

What you’ll need

  • Data: 500–1,000 VOC items (the last ~30 days, across channels)
  • Tools: spreadsheet or simple DB, embedding endpoint or low-code AI tool, clustering (HDBSCAN/DBSCAN or k-means), and an LLM for labeling
  • People: one data owner and 2 SMEs (product/support) for validation

Step-by-step (what to do, how to do it, and what to expect)

  1. Export & sample (1–2 hrs): pull 500–1,000 items into CSV. Expect ~20–30% noise.
  2. Clean (2–3 hrs): normalize, remove PII, dedupe. Output: id, text, channel, date.
  3. Embed (30–90 mins): convert texts to vectors. Expect ~1 hour per 1k items depending on tool.
  4. Cluster (30–60 mins): run HDBSCAN/DBSCAN for unknown counts or k-means for fixed groups. Tune min cluster size to avoid tiny, brittle clusters.
  5. Label & enrich (30–60 mins): for each top cluster, ask the LLM for a theme name, one-line summary, sentiment, priority, owner, and one representative quote.
  6. Validate (2–3 hrs): SMEs review a 5–10% sample across clusters; correct labels and flag noisy clusters.
  7. Prioritize & act (1–3 days): pick top 3 clusters by volume × negative sentiment × impact. Create tickets, assign owners, measure outcome.
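To make steps 3–4 concrete, here's a minimal sketch of the embed-then-cluster idea using only the standard library. The toy 2-D vectors stand in for real embeddings (which would come from an embedding API), and the tiny k-means is a stand-in for a proper library run (scikit-learn's KMeans, or HDBSCAN when you don't know the cluster count):

```python
import math

def kmeans(vectors, k, iters=20):
    """Minimal k-means with deterministic farthest-first seeding."""
    centroids = [vectors[0]]
    while len(centroids) < k:
        # Seed each new centroid at the point farthest from existing ones.
        centroids.append(max(vectors, key=lambda v: min(math.dist(v, c) for c in centroids)))
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        assign = [min(range(k), key=lambda c: math.dist(v, centroids[c])) for v in vectors]
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [v for v, a in zip(vectors, assign) if a == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign

# Toy 2-D "embeddings": two obvious groups, standing in for real vectors.
vectors = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
           [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]]
labels = kmeans(vectors, k=2)  # items in the same group get the same label
```

Real embeddings have hundreds of dimensions, but the mechanics are identical: similar feedback lands close together, and the cluster label is what you hand to the LLM for naming.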

Copy-paste AI prompt (use after you provide 10–50 sample texts from a cluster):

“You are an analyst. Given the following feedback items, provide for this cluster: 1) a concise theme name (3–5 words); 2) a one-sentence summary; 3) dominant sentiment (positive/neutral/negative) and a short explanation; 4) suggested priority (low/medium/high) with reason; 5) one suggested next action and recommended owner (Product or Support); 6) one representative customer quote. Feedback items: [paste items here].”
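If you're labeling many clusters, it helps to fill that prompt programmatically rather than pasting by hand. A small sketch (the helper name and item cap are mine, not a required convention):

```python
PROMPT_TEMPLATE = (
    "You are an analyst. Given the following feedback items, provide for this cluster: "
    "1) a concise theme name (3-5 words); 2) a one-sentence summary; "
    "3) dominant sentiment (positive/neutral/negative) and a short explanation; "
    "4) suggested priority (low/medium/high) with reason; "
    "5) one suggested next action and recommended owner (Product or Support); "
    "6) one representative customer quote. Feedback items:\n{items}"
)

def build_labeling_prompt(cluster_items, max_items=50):
    """Fill the template with up to max_items feedback texts, one per line."""
    lines = "\n".join(f"- {text}" for text in cluster_items[:max_items])
    return PROMPT_TEMPLATE.format(items=lines)

prompt = build_labeling_prompt([
    "Checkout fails on my phone every time",
    "Mobile payment button does nothing",
])
```

Send the resulting string to whatever LLM you're using; capping at 10–50 items per cluster keeps the prompt focused and cheap.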

Worked example

  • Cluster: “Checkout failure on mobile” — negative, high → Action: urgent bug fix (Product) + support script.
  • Cluster: “Pricing confusion” — negative, high → Action: audit pricing UI + test new copy (Product/Marketing).
  • Cluster: “Keyboard shortcuts request” — neutral/positive, medium → Action: add to backlog for roadmap grooming.
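The "volume × negative sentiment × impact" ranking from step 7 is simple enough to script. A sketch using made-up numbers for the three example clusters (the 1–5 impact scale is an assumed weighting, not a standard):

```python
def priority_score(volume, pct_negative, impact):
    """Rank clusters by volume x share of negative items x business impact (1-5)."""
    return volume * pct_negative * impact

clusters = [
    {"theme": "Checkout failure on mobile", "volume": 120, "pct_negative": 0.9, "impact": 5},
    {"theme": "Pricing confusion", "volume": 80, "pct_negative": 0.7, "impact": 4},
    {"theme": "Keyboard shortcuts request", "volume": 30, "pct_negative": 0.1, "impact": 2},
]

ranked = sorted(
    clusters,
    key=lambda c: priority_score(c["volume"], c["pct_negative"], c["impact"]),
    reverse=True,
)
top3 = [c["theme"] for c in ranked[:3]]  # these become your tickets
```

Any monotonic scoring works; the point is to make the trade-off explicit so SMEs can argue with the weights instead of the gut feel.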

Common mistakes & fixes

  • Too many tiny clusters — fix: raise min cluster size or merge similar clusters manually.
  • No validation loop — fix: require a 5–10% SME review each run and log corrections.
  • Ignoring time trends — fix: run rolling windows and compare week-on-week to catch bursts.
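For the time-trend fix, a week-on-week count per cluster is enough to catch bursts. A standard-library sketch (the items and dates are made up for illustration):

```python
from collections import Counter
from datetime import date

def weekly_counts(items):
    """Count items per (ISO year, ISO week) so week-on-week bursts stand out."""
    return Counter(d.isocalendar()[:2] for _, d in items)

# (cluster theme, feedback date) pairs from one cluster.
items = [
    ("checkout bug", date(2024, 5, 6)),
    ("checkout bug", date(2024, 5, 7)),
    ("checkout bug", date(2024, 5, 13)),
    ("checkout bug", date(2024, 5, 14)),
    ("checkout bug", date(2024, 5, 15)),
]
counts = weekly_counts(items)  # e.g. {(2024, 19): 2, (2024, 20): 3}
```

Compare adjacent weeks' counts each run; a cluster that doubles week-on-week deserves attention even if its total volume is still small.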

7-day action plan

  1. Day 1: Export 30 days of VOC; sample 500–1,000 items.
  2. Day 2: Clean data and remove PII/duplicates.
  3. Day 3: Generate embeddings and run initial clustering.
  4. Day 4: Label top clusters with the prompt above; review with 2 SMEs.
  5. Day 5: Prioritize top 3 clusters and create tickets/experiments.
  6. Day 6: Implement one quick win (support script or copy change).
  7. Day 7: Measure and report results; set weekly cadence.

Small, repeatable cycles beat perfect models. Start with the 5-minute LLM test, run the 1-week pilot, and lock in a human review loop. You’ll turn noisy VOC into prioritized actions fast.

— Jeff