Win At Business And Life In An AI World


Jeff Bullas

Forum Replies Created

  • Jeff Bullas
    Keymaster

    Yes — and it’s easier than you think. You can build interactive worksheets that give instant feedback and even auto-grade many answers with simple tools and a little AI help.

    Quick context: for beginners the fastest wins come from combining a quiz tool that handles interaction and auto-grading (for closed questions) with an AI assistant for scoring written answers and giving feedback.

    • What you’ll need:
      • A free Google account (for Google Forms and Sheets) or Microsoft account (Forms).
      • An AI chat tool (ChatGPT or similar) for grading and writing feedback — free tier is fine for testing.
      • Optional: a spreadsheet to collect responses and run basic automation.

    Step-by-step (quick setup):

    1. Create a new Google Form and turn on “Make this a quiz” in Settings.
    2. Add multiple-choice and checkbox questions — assign correct answers so Google auto-grades instantly.
    3. For short-answer items where exact text is required, use response validation or provide a set of acceptable answers.
    4. Collect responses into Google Sheets (Responses > Link to spreadsheet).
    5. For open-ended answers, export the student answers and use an AI grading prompt (paste answers in batches) to get scores and feedback.

    Copy-paste AI prompt (use as-is):

    Grade the student answer below on a scale of 0–5 using this rubric: 5 = complete, accurate, clear examples; 4 = minor omissions; 3 = partially correct; 2 = limited understanding; 1 = serious errors; 0 = irrelevant. Give a one-sentence reason and one tip to improve. Student answer: “{STUDENT_ANSWER}”. Question: “{QUESTION}”.
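    If you are batch-grading from a spreadsheet export, the prompt above can be filled in programmatically. A minimal Python sketch (the rubric text mirrors the prompt above; the function name, batching size, and sample answers are my own illustration):

```python
# Fill the grading prompt for each exported answer, grouped into
# batches of 10-20 as recommended. All names here are illustrative.

PROMPT_TEMPLATE = (
    "Grade the student answer below on a scale of 0-5 using this rubric: "
    "5 = complete, accurate, clear examples; 4 = minor omissions; "
    "3 = partially correct; 2 = limited understanding; 1 = serious errors; "
    "0 = irrelevant. Give a one-sentence reason and one tip to improve. "
    'Student answer: "{answer}". Question: "{question}".'
)

def build_grading_prompts(question, answers, batch_size=10):
    """Return filled prompts grouped into batches of batch_size."""
    prompts = [PROMPT_TEMPLATE.format(answer=a, question=question)
               for a in answers]
    return [prompts[i:i + batch_size]
            for i in range(0, len(prompts), batch_size)]

batches = build_grading_prompts(
    "Draft a 2-line polite follow-up email.",
    ["Dear Sam, just checking in on my last note.", "Hi, any update?"],
)
```

    Paste each batch into your AI chat tool one at a time, then copy the scores back into the response sheet.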

    Worked example (business email worksheet):

    • Create a Form with 6 questions: 3 multiple-choice (polite closings, subject line choice), 2 short-answer exact (date format), 1 open-ended (draft a 2-line polite follow-up email).
    • Google Forms auto-scores the first 5. For the open-ended draft, copy responses into the AI prompt above to get a 0–5 score plus feedback.
    • Return scores to students by email or paste back into the sheet and use a mail merge add-on if you want automation.

    Do / Do-not checklist

    • Do: start with mostly multiple-choice for instant wins.
    • Do: add a clear rubric for every open question.
    • Do: batch-grade open answers with AI — 10–20 at a time.
    • Do-not: rely on AI to grade high-stakes tests without human spot-checks.
    • Do-not: make rubrics vague or expect 100% accuracy on nuance.

    Common mistakes & fixes

    • Mistake: too many open questions — Fix: swap to multiple-choice or short answer where possible.
    • Mistake: no rubric — Fix: create a 0–5 scale with key points for each score.
    • Mistake: privacy concerns — Fix: remove names or get consent before using AI tools.

    Action plan (start today)

    1. Make a 5-question Google Form quiz (20–30 minutes).
    2. Collect 5 real responses (test with friends or colleagues).
    3. Use the AI prompt to grade the open answers and refine your rubric.
    4. Repeat and add one automation step (mail merge or simple script).

    Reminder: begin small, validate with real students, and add complexity only when the basic flow works. Instant feedback plus human oversight gives the best learner outcomes.

    Jeff Bullas
    Keymaster

    Nice point — yes: your workflow is sensible. Use AI to scale drafts, then layer human review to protect tone, pedagogy and culture. Here’s a compact, action-focused addition you can try today to get fast wins.

    What you’ll need (quick checklist)

    • Original lesson text, slides or assessment items.
    • Target language, region and student profile (age, formality).
    • Short glossary of key terms and preferred phrasing.
    • A reviewer (teacher or native speaker) and 1–2 real students for a pilot.
    • 10–20 minutes per lesson for review and tweak after AI output.

    Step-by-step — do this now

    1. Pick one lesson (20–30 minutes) as your pilot.
    2. Prepare a one-paragraph instruction for the AI (tone, audience, glossary). Use the prompt below.
    3. Ask for two outputs: (A) literal, (B) localized — and a sentence-by-sentence confidence note.
    4. Quick review: teacher scans for instructions, assessments and examples that might confuse learners.
    5. Pilot in class, collect 3 quick student reactions (understandable? natural? friendly?).
    6. Update glossary and prompt checklist with the top 5 recurring edits, then repeat.

    Copy-paste AI prompt (robust)

    Translate the following classroom material from English to [TARGET_LANGUAGE] for students aged [AGE_RANGE] in [REGION]. Preserve a warm, encouraging teacher voice. Keep terms from this glossary unchanged: [GLOSSARY]. Produce two versions: (A) literal translation; (B) localized version adapted for [REGION] students with natural phrasing and culturally relevant examples. For each sentence, include a confidence score (high/medium/low) and flag any cultural references or ambiguous phrases with suggested edits. Maintain the original learning objectives exactly.

    Short worked example

    1. Original: “Try this activity with a partner — it’s a fun way to learn.”
    2. Literal: “Prueba esta actividad con un compañero — es una forma divertida de aprender.” (confidence: high)
    3. Localized: “Hagan esta actividad en parejas; les ayudará a aprender de forma práctica y amena.” (confidence: medium — note: check gendered language in class)
    4. Human tweak: adjust gendered words and swap any unfamiliar example for a local equivalent.

    Common mistakes & fixes

    • Too literal: Ask for a localized version and cultural notes.
    • Tone drift: Specify exact voice (warm, encouraging) and give two short sample sentences.
    • Ambiguous assessments: Require the AI to preserve learning objectives and flag unclear items.

    Action plan — 7-day sprint

    • Day 1: Choose pilot + make glossary.
    • Day 2: Run prompt and get A/B outputs.
    • Day 3: Teacher review (20 min) and tweak.
    • Day 4: Pilot in class, gather feedback.
    • Day 5: Update glossary and prompt with top edits.
    • Day 6–7: Repeat with 1–2 more lessons and scale if results are steady.

    Closing reminder: Aim for fast iterations. AI gives speed; human review secures quality. Translate, test, tweak — small cycles build trust and steady improvement.

    Jeff Bullas
    Keymaster

    Spot on: locking one style, one palette, and one camera language is the fastest way to cut rework. Let’s add two pro moves that make AI boards feel “same character, new pose” rather than a new person each frame.

    Two high-value upgrades

    • Make a Style Plate first: one image that shows your character front view, outfit, brand palette, and a simple background. Reuse it as the visual anchor for every frame.
    • Face Tile trick: crop a clear head-and-shoulders image (about 512×512). Use it as the identity reference when generating or inpainting. It dramatically reduces drift across frames.

    What you’ll need (beyond your list)

    • 1 Style Plate (character + palette + background).
    • 1 Face Tile (tight crop of the character’s face).
    • 3–5 pose references or quick stick-figure sketches for key actions.
    • Hex codes for brand colors and a high-res logo (place the logo in an editor, not by AI).

    Do / Do not

    • Do keep the same outfit, hairstyle, and lighting across all frames.
    • Do ask for “minimal background” and “no text” to avoid clutter and weird AI typography.
    • Do reuse the same seed or the same image reference for every frame if your tool supports it.
    • Do name files clearly: AdName_15s_F01_v1.png, F02_v1.png, etc.
    • Don’t change style words mid-project (e.g., “painterly” then “vector”). Pick one.
    • Don’t rely on AI for exact logos or legal text. Add those manually later.

    Step-by-step (with the consistency layer)

    1. Create the Style Plate (1 image): character front view, outfit, brand palette swatches on the side, simple background. Save it.
    2. Make the Face Tile: crop the Style Plate to just the face (sharp eyes, neutral expression). Save it.
    3. Write your visual brief in one short paragraph (tone, shot sizes, palette, lighting, background).
    4. Generate Frame 1 using the Style Plate and Face Tile as references. Adjust until the look is right.
    5. Generate Frames 2–6 with the same references/seed. If a pose is off, inpaint only the arms/hands or head, not the whole image.
    6. Assemble an animatic: 15–30s total, 3–4s per frame. Add temp music/VO and adjust timing.
    7. Brand polish: correct edges, place logo and CTA in an editor. Export clean PNGs for motion.

    Copy-paste prompt (robust template)

    Use this twice: first to make the Style Plate, then for each frame. Replace words in [brackets].

    Global Style Block (paste at the top of every prompt):

    “Clean flat-vector art, uniform thin stroke, soft geometric shapes. Warm morning light, soft shadows. Brand palette: [HEX1], [HEX2], [HEX3], off-white background. Minimal modern [environment], no clutter. Consistent character identity and outfit across all frames. Aspect: [16:9 or 9:16]. No text, no watermark.”

    Style Plate prompt:

    “Create a style plate: front-view portrait of a friendly adult [gender/age range], wearing [outfit], neutral expression. Include a small swatch row of the brand colors on the side. Background: simple gradient. Purpose: this image will define character identity, lighting, and palette for all storyboard frames.”

    Frame Card prompt (run one per frame):

    “Using the same style as the Style Plate and matching the same character identity and outfit, generate Frame [#]: [shot size and angle], [action], [camera note], [background note]. Keep the palette from the Style Plate. Use minimal background, no text. Ensure the face matches the Style Plate. Save as: [Project]_F[#]_v1.png.”

    Worked example (6-frame coffee subscription ad)

    Copy, paste, and replace bracketed parts. Attach your Style Plate and Face Tile as references if your tool supports them.

    Global Style Block: Clean flat-vector art, uniform thin stroke, soft geometric shapes. Warm morning light, soft shadows. Brand palette: teal #00AA99, coral #FF7A66, charcoal #2E2E2E, off-white #F6F6F6. Minimal modern kitchen, no clutter. Consistent character identity and outfit across all frames. Aspect: 16:9. No text, no watermark.

    Frame 1: Medium close-up, adult coffee lover holding a smartphone near a coffee mug, camera 15° to the right, subtle steam in background.
    Frame 2: Over-shoulder close-up of the phone showing a clean UI placeholder, bright highlight on screen, shallow depth look.
    Frame 3: Medium shot, user taps “Subscribe” (placeholder), relaxed smile, same outfit and hairstyle, soft rim light.
    Frame 4: Cutaway product moment: fresh beans pouring into a jar on a counter, same palette, minimal props.
    Frame 5: Medium shot, user receives a box at the door, warm light spill, consistent kitchen palette through the doorway.
    Frame 6: Wide shot, user at counter making coffee, open space on right for logo/CTA, bold simple shapes, high contrast.

    Insider consistency tips

    • Identity lock: when possible, set “character reference strength” moderately (e.g., 50–70%). Too high = stiff; too low = drift.
    • Pose control: if your tool supports pose guidance, feed it a quick stick-figure or a photo reference for hand/arm accuracy.
    • Inpaint small: only fix the face or hands. Lock areas you like so the AI doesn’t redraw the whole scene.
    • Keep a seed log: note the seed/settings for frames that look right so you can reproduce them.

    Common mistakes & quick fixes

    • Inconsistent faces — Use the Face Tile for every frame; nudge identity strength up slightly for close-ups.
    • Messy backgrounds — Add “minimal background, soft gradient depth, no clutter.”
    • Off-brand colors — Include exact hex codes in every prompt. Remove extra color words.
    • Text and logo artifacts — Always add logos/text in your editor after generation.

    Action plan (next 60–90 minutes)

    • 15 min: Build the Style Plate and Face Tile.
    • 30–45 min: Generate 6 Frame Cards (1–2 tries per frame). Inpaint faces/hands as needed.
    • 15–30 min: Drop frames into a timeline, add temp music/VO, set durations, export the animatic.

    What to expect

    • First pass in under 90 minutes with solid pacing and a consistent look.
    • One more round (30–60 minutes) to fix drift and polish edges.
    • Final-ready boards within 1–2 days including stakeholder tweaks.

    The shortcut isn’t more prompts; it’s one great Style Plate plus disciplined reuse. Nail those, and the rest flows.

    Jeff Bullas
    Keymaster

    Spot on: starting conservative with confidence thresholds is the safety net that protects margins while you learn. Let’s add one more layer that moves this from “smart triage” to “profit-aware automation”: bake costs and policy into the AI so it chooses the cheapest acceptable path and writes clear, time-bound updates automatically.

    Do / Do not (to keep costs down and trust high)

    • Do encode repair, shipping, and replacement costs so AI decisions reflect real money.
    • Do set a hard repair cap (e.g., “if expected repair cost ≥ 60% of replacement, prefer replace”).
    • Do enforce photo quality and required fields before triage.
    • Do version your policy/prompt so you can audit decisions (“policy_version: v1.2”).
    • Do generate a promise date in every message to reduce follow-ups.
    • Do not let the AI invent rules; pass policy and price data explicitly.
    • Do not auto-handle when serial is missing or photos are blurry—kick back a friendly request.
    • Do not skip an audit trail; log inputs, AI result, human override, and final outcome.

    What you’ll need

    • SKU master with replacement value, warranty length, and common faults.
    • Cost table: inbound/outbound shipping, bench diagnosis, standard labour rate, typical parts costs.
    • Photo guidelines with 2–3 examples you’ll reference in the auto-reply.
    • Ticket fields for: category, confidence, reason, parts list, promise date, cost_estimate, policy_version.
    • AI assistant that can read a small JSON policy block and return structured JSON.

    Step-by-step: make the AI cost-aware and customer-friendly

    1. Encode policy as data: keep a short JSON “policy pack” per product family (warranty months, repair cap %, cost table, abuse rules, SLAs). Version it.
    2. Add cost logic: ask the AI to estimate total repair cost (labour + parts + shipping) and compare with replacement value using your cap rule.
    3. Photo gate: run a quick image-quality check first; if low quality or missing angles, reply with a one-click photo request and pause triage.
    4. Structured output: require the AI to return: category, confidence, reason, parts/tools, promise_date, cost_estimate, actions_for_customer, actions_for_ops, flags, policy_version.
    5. Customer update: auto-generate a warm, three-sentence message with clear next steps and a date.
    6. SLA clock: start timers based on category; send proactive day-2 and day-5 updates.
    7. Feedback loop: log final cost and outcome; adjust your repair cap or thresholds monthly.

    Copy-paste prompt: cost-aware triage + customer next steps

    “You are an RMA triage assistant. Use the policy JSON and the customer intake to decide the cheapest acceptable action that meets policy. If data is missing or photos are low quality, return category=’Need More Info’.

    Policy (JSON): {POLICY_JSON}

    Intake: order#: {ORDER}, purchase_date: {DATE}, serial#: {SERIAL}, sku: {SKU}, photos: {PHOTO_LINKS}, issue: {DESCRIPTION}

    Tasks: 1) Validate warranty from policy. 2) Estimate repair_cost = labour_hours*labour_rate + parts_cost + shipping_in + shipping_out + bench_fee. 3) Compare repair_cost to replacement_value and apply repair_cap_percent. 4) Set category to one of: In Warranty – Repair, In Warranty – Replace, Out of Warranty – Quote Repair, Refund Requested, Possible Abuse, Need More Info. 5) Produce a customer-friendly, three-sentence message with a promise_date per SLA. 6) Return structured JSON only.

    Output JSON keys: category, confidence (0–1), reason, parts_tools, labour_hours_estimate, cost_estimate_total, actions_for_customer, actions_for_ops, promise_date, flags (e.g., missing_serial, low_quality_photos, visible_damage), policy_version.”
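    A practical guard for step 4 above: validate the AI's returned JSON against the key list and the allowed categories before acting on it, and reject anything else. A minimal Python sketch (key and category names come from the prompt above; the helper itself is an illustrative sketch, not part of any particular ticket system):

```python
import json

# Reject any AI triage response that is missing required keys or uses
# a category outside the allowed list from the prompt above.

REQUIRED_KEYS = {
    "category", "confidence", "reason", "parts_tools",
    "labour_hours_estimate", "cost_estimate_total", "actions_for_customer",
    "actions_for_ops", "promise_date", "flags", "policy_version",
}

ALLOWED_CATEGORIES = {
    "In Warranty – Repair", "In Warranty – Replace",
    "Out of Warranty – Quote Repair", "Refund Requested",
    "Possible Abuse", "Need More Info",
}

def validate_triage(raw_json):
    """Return (ok, message) for a raw AI triage response."""
    data = json.loads(raw_json)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        return False, "missing keys: " + ", ".join(sorted(missing))
    if data["category"] not in ALLOWED_CATEGORIES:
        return False, "unknown category: " + data["category"]
    return True, "ok"
```

    Anything that fails this check goes to a human instead of triggering automation.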

    Policy JSON template (fill and pass into {POLICY_JSON})

    {"policy_version":"v1.2","warranty_months":12,"repair_cap_percent":0.6,"sla_days":{"repair":7,"replace":3,"refund":5},"costs":{"labour_rate":40,"bench_fee":15,"shipping_in":12,"shipping_out":12},"sku_overrides":{"SKU-100":{"replacement_value":120},"SKU-200":{"replacement_value":250}},"abuse_rules":{"visible_cracks":true,"liquid_damage":true},"photo_requirements":{"min_count":2,"checklist":["front close-up","serial label"]}}

    Quick image-quality gate prompt (run before triage)

    “Rate photo set quality for RMA. Inputs: {PHOTO_LINKS}. Requirements: at least {MIN_COUNT} photos; must show serial label and the fault area in focus. Return JSON: quality (‘good’|’poor’), missing_angles (list), notes. If ‘poor’, draft a 2-sentence request telling the customer which photos to add.”

    Worked example

    Intake: order# 7841, sku SKU-100, serial A1B2C3, purchase 10 months ago, two clear photos show a stuck power button. Policy: 12-month warranty, replacement value $120, labour_rate $40/hr, bench_fee $15, shipping both ways $24, repair_cap 60%.

    • AI estimates: labour 0.5 hr ($20) + part $9 + bench $15 + shipping $24 = $68.
    • Repair cap = 60% of $120 = $72. Repair is under cap.
    • Category: In Warranty – Repair (confidence 0.92). Promise date: today + 7 business days.
    • Auto-actions: create repair ticket, include parts “switch S-09, torx T5”, generate label, send customer message with timeline.
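    The arithmetic in this worked example takes a few lines of Python to reproduce (the numbers come from the policy above; the function itself is an illustrative sketch):

```python
# Reproduce the worked example: estimated repair cost vs. the repair cap.

def repair_decision(labour_hours, labour_rate, parts, bench_fee,
                    shipping_both_ways, replacement_value, cap_percent):
    """Return (repair_cost, cap, action) using the repair-cap rule."""
    repair_cost = (labour_hours * labour_rate + parts
                   + bench_fee + shipping_both_ways)
    cap = cap_percent * replacement_value
    action = "repair" if repair_cost < cap else "replace"
    return repair_cost, cap, action

cost, cap, action = repair_decision(
    labour_hours=0.5, labour_rate=40, parts=9, bench_fee=15,
    shipping_both_ways=24, replacement_value=120, cap_percent=0.6,
)
# cost = 68.0 and cap = 72.0, matching the $68 vs $72 figures above,
# so the cheapest acceptable action is "repair".
```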

    Common mistakes & fixes

    • Forgetting costs → Fix: always pass a policy JSON with replacement_value and standard costs.
    • Letting the AI guess → Fix: restrict outputs to allowed categories; reject anything else.
    • Blurry photos slow everything → Fix: run the image gate and auto-request specific retakes.
    • No promise dates → Fix: compute from SLA and insert a date in every message.
    • Drift in rules → Fix: version your policy and store it with the ticket; update monthly.

    48-hour quick win

    1. Build the policy JSON for your top 5 SKUs (replacement value, warranty months, costs).
    2. Add the image-quality gate; auto-reply for missing/poor photos.
    3. Run the cost-aware triage prompt with conservative thresholds (auto-handle only when confidence ≥ 0.9).

    7-day rollout

    1. Day 1–2: Wire policy JSON and triage prompt; add fields to your ticket system.
    2. Day 3: Test on 50 past cases; compare AI decision vs final outcome; tune repair_cap and parts lists.
    3. Day 4: Add customer message template with promise dates; set SLA timers.
    4. Day 5–6: Pilot on 5% live tickets; log confidence, action, and human overrides.
    5. Day 7: Review costs saved vs replacements; adjust thresholds; expand to 15–25% of cases.

    Expectation setting: the first wins come from stopping back-and-forth (photo gate), giving instant timelines, and avoiding uneconomic repairs with the cap rule. Keep thresholds high at first, and widen only as your overrides drop.

    Jeff Bullas
    Keymaster

    Hook: Yes — AI can translate classroom materials well, but not perfectly. It’s fast and useful for first drafts and accessibility. You’ll still need a human touch to preserve nuance, pedagogy and classroom voice.

    Context: Teachers and trainers want accurate translations that keep tone (encouraging, formal, playful), preserve learning objectives, and respect cultural nuance. AI is excellent for speed and consistency; it’s less reliable with idioms, humor, assessment wording and subtle pedagogical cues.

    What you’ll need:

    • Original lesson content (text, slides, prompts).
    • Target language and audience (age, formality level, region).
    • Short glossary of key terms or preferred translations.
    • Time for a quick human review (teacher or native speaker).

    Step-by-step: How to do it

    1. Pick a small pilot: 1–3 lessons.
    2. Prepare a brief instruction for the AI: specify tone, audience, and any terms to keep.
    3. Run the translation and ask for two variants: literal and localized.
    4. Compare both variants against your pedagogical goals.
    5. Have a native speaker or colleague review and mark necessary tweaks.
    6. Iterate and expand once you’re confident.

    Practical prompt you can copy-paste:

    Translate the following classroom material from English to [TARGET_LANGUAGE]. Preserve the teacher’s tone (warm and encouraging), keep technical terms from this glossary unchanged, maintain the original learning objectives, and flag any cultural references that should be adapted. Provide two versions: (A) literal translation, and (B) localized version suitable for students in [REGION]. After each version, list the key changes you made.

    Worked example (short):

    1. Original sentence: “Try this activity with a partner — it’s a fun way to learn.”
    2. AI literal translation (example): “Prueba esta actividad con un compañero — es una forma divertida de aprender.”
    3. AI localized variant (example): “Realicen esta actividad en parejas; les ayudará a aprender de forma práctica y amena.”
    4. Human tweak: Replace “compañero” with “compañera/o” or “compañeros” based on class mix; keep cultural examples relevant.

    Common mistakes & fixes

    • Over-literal phrasing: Fix by asking for a localized version or examples tied to the students’ culture.
    • Shifted tone (too formal or too casual): Fix by specifying the level of formality in the prompt.
    • Lost pedagogical intent: Fix by including learning objectives and a glossary in the prompt.

    Action plan — quick checklist:

    • Do: Start small, include objectives and a glossary, request two variants.
    • Do not: Publish translations without a human review.
    • Do: Pilot with real students and collect feedback.
    • Do not: Assume idioms, jokes, or assessments are correctly adapted.

    Closing reminder: Use AI for speed and consistency, but keep humans in control for nuance. Translate, test, tweak, repeat — that workflow gives you fast wins and steadily improving quality.

    Jeff Bullas
    Keymaster

    Hook — Yes, you can train AI to be your first filter and save hours.

    Short version: use AI to estimate realistic resale value, list likely fees, and flag photo/description red flags. Then do a fast human check before you buy. The combo cuts false positives and speeds up your decision-making.

    What you’ll need

    • Accounts on the marketplaces you hunt (eBay, Facebook, local sites).
    • Google Sheets or similar to log listings and results.
    • An AI chat tool (ChatGPT-type) or a simple automation that can run a prompt on each listing.
    • A clear buy rule: minimum net margin (I use 25%+) and max repair cost.
    • A 3-point photo/condition checklist.

    Quick do / do-not checklist

    • Do use the lower of two recent sold prices for conservative estimates.
    • Do include all fees, payment charges and acquisition shipping in your math.
    • Do record every evaluated listing — data tunes your rules fast.
    • Do not chase “maybe” wins without photos that confirm condition.
    • Do not ignore serial/model mismatches or missing key accessories.

    Step-by-step routine (under 5 minutes per listing)

    1. Open the listing. Find two recent sold prices for the same model — use the lower.
    2. Ask the AI (copy-paste prompt below) to estimate 7–14 day resale price, fees and net profit.
    3. Compute net profit: expected net resale − (listing price + acquisition shipping + repair estimate); divide by total cost to get the margin.
    4. Run the 3-photo red-flag check: missing parts, water/damage signs, serial/model mismatch.
    5. If margin ≥ your threshold (e.g., 25%) and no major red flags, shortlist for a quick seller Q&A or purchase.

    Example (realistic quick calc)

    • Listing price: $80
    • Lower sold price: $140 → expected net resale at 15% fees = $140 × 0.85 = $119
    • Acquisition shipping $5 + repair $10 → total cost = $95
    • Net profit = $119 − $95 = $24 → margin = 24 / 95 ≈ 25.3% → shortlist it.
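    The quick calc above is easy to turn into a reusable helper. A minimal Python sketch (the figures match this example; the function and parameter names are my own illustration):

```python
# Reproduce the quick calc: net resale after fees, total cost,
# net profit, and margin for a candidate flip.

def flip_margin(lower_sold_price, fee_rate, listing_price,
                acquisition_shipping, repair_estimate):
    """Return (net_resale, total_cost, net_profit, margin)."""
    net_resale = lower_sold_price * (1 - fee_rate)
    total_cost = listing_price + acquisition_shipping + repair_estimate
    net_profit = net_resale - total_cost
    return net_resale, total_cost, net_profit, net_profit / total_cost

net_resale, total_cost, net_profit, margin = flip_margin(
    lower_sold_price=140, fee_rate=0.15, listing_price=80,
    acquisition_shipping=5, repair_estimate=10,
)
# net_resale ≈ 119, total_cost = 95, net_profit ≈ 24, margin ≈ 25.3%
```

    Compare `margin` against your buy rule (e.g., 0.25) before shortlisting.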

    Common mistakes & fixes

    • Relying on asking price alone — fix: always subtract fees and realistic sale price.
    • Underestimating repairs — fix: use conservative repair ranges and update after each sale.
    • Not tracking outcomes — fix: log hits/misses; tune fee and repair assumptions weekly.

    Copy-paste AI prompt (use as-is)

    You are a resale analyst. Given this listing data, estimate a realistic resale price in 7–14 days on the same marketplace, list likely fees (platform %, payment %), and calculate net profit if bought at the listed price. Provide confidence (low/med/high), list three photo/description red flags to check, and two short seller questions to confirm condition. Listing: title: [title], price: [amount], shipping: [cost], condition: [new/like new/used/damaged], sold-price references: [two recent sold prices], key details: [model, serial, accessories].

    7-day action plan

    1. Day 1: Pick 2 categories, set your margin rule, set alerts.
    2. Days 2–3: Evaluate 30–50 listings using the prompt and log results.
    3. Day 4: Buy 2–3 test items that meet the rules.
    4. Days 5–7: List and sell, record final sale price, fees, time-to-sale. Adjust rules.

    Small experiments beat perfect plans. Use the AI prompt as your first filter, then verify by eye. Track outcomes and adjust — you’ll quickly separate noise from repeatable flips.

    Jeff Bullas
    Keymaster

    Try this in 5 minutes: add a “Silent Risk” column in your client sheet. Flag clients who haven’t engaged in 60+ days and show a 15%+ drop in usage/revenue vs 3 months ago. Sort by this flag and call the top 5 today.

    Why this works: AI can absolutely predict churn, but the win comes when each risk signal triggers one clear, human action. Think thermostat: detect heat, then turn the dial. Keep it simple, measurable, and repeatable.

    What you’ll need

    • A spreadsheet/CRM export with: client_id, signup_date, last_contact_date, last_login/activity_date, monthly_revenue or usage, revenue_3mo_ago or usage_3mo_ago, complaints_last_90d, nps_score (if you have it).
    • An action menu: phone call, 15-min review, personalized email, small credit/bonus, onboarding refresher.
    • Owner and response time for each action (e.g., High risk → call within 48 hours).
    • One column to log outcomes (stayed/churned/upsold/no response) and one for next step/date.

    Step-by-step: from rules to “simple AI”

    1. Build a clear score (RFM-style, 10 minutes)
      • Recency (days since last activity): 0–14 = 0, 15–30 = 1, 31–60 = 2, 61+ = 3.
      • Frequency (uses/logins last 30 days): 10+ = 0, 5–9 = 1, 1–4 = 2, 0 = 3.
      • Monetary/Usage change vs 3 months ago: increase/flat = 0, drop 1–14% = 1, drop 15–29% = 2, drop 30%+ = 3.
      • Sentiment/Support: complaint in 90d = +3; NPS ≤6 = +3; neutral (7–8) = +1; positive (9–10) = 0.
      • Tenure: new (<90 days) = +2 (onboarding risk); 90+ days = 0.
      • Total score (typically 0–14). Buckets: 0–3 Low, 4–7 Medium, 8+ High.
    2. Map score to a one-line play
      • High (8–14): phone call within 48h + “make it right” plan; manager loop if complaint present.
      • Medium (4–7): personalized email + invite to 15-min review; follow-up in 7 days.
      • Low (0–3): include in next check-in; send value tip or usage summary.
    3. Action matrix by trigger (precision improves results)
      • Recency high (61+ days): “We miss you” check-in + quick booking link; remind 1–2 key benefits.
      • Frequency drop: share a 3-step “get back on track” guide; offer a 10-minute tune-up call.
      • Monetary/usage drop: review fit; propose right-sized plan or add-on to restore value.
      • Negative sentiment: apology, fix the root issue, small goodwill credit if warranted.
      • Early tenure: onboarding refresher + confirm desired outcome and next milestone.
    4. Holdout test (insider trick)
      • Within each bucket, randomly hold out 10% who receive no extra outreach for 30 days.
      • Compare retention of contacted vs holdout. That’s your incremental impact. Keep what moves the needle.
    5. Guardrails (avoid “AI mirages”)
      • Define churn clearly (e.g., canceled contract or 90 consecutive days inactive/no purchase).
      • Use only data available before the churn decision date (no peeking into the future).
      • Exclude clients in collections/legal from outreach automations.
    6. Scale to simple AI (after 30–60 days of logs)
      • Export your scored data with outcomes. Let a no-code model rank risk (top 10% = “red zone”).
      • Keep the same action matrix; you’re just improving who gets contacted first.

    Example (how this looks in practice)

    • Client A: 75 days since last login (3), 0 logins (3), usage down 35% (3), complaint last month (3), tenure 2 years (0) → Score 12 (High) → Same-day apology call; fix ticket; offer 1-month add-on at no cost; schedule success review.
    • Client B: 25 days since last contact (1), 6 logins (1), usage down 18% (2), NPS 7 (1), tenure 8 months (0) → Score 5 (Medium) → Email + 15-min review; share a 3-step usage plan; follow-up in 7 days.
    • Client C: 10 days since last activity (0), 12 logins (0), usage up 5% (0), no complaints (0), tenure 45 days (2) → Score 2 (Low) → Onboarding tip email; set milestone for day 60.
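    The three client examples above can be sanity-checked with a small scoring function. A minimal Python sketch (thresholds follow step 1; note I treat a complaint and a low NPS as alternative triggers for the +3 sentiment signal, which is a simplification):

```python
# Score clients with the RFM-style rubric from step 1 above.

def churn_score(days_inactive, logins_30d, usage_change_pct,
                complaint_90d, nps, tenure_days):
    """Return (score, bucket) for one client. usage_change_pct < 0 = drop."""
    score = 0
    # Recency (days since last activity)
    if days_inactive >= 61:   score += 3
    elif days_inactive >= 31: score += 2
    elif days_inactive >= 15: score += 1
    # Frequency (logins in last 30 days)
    if logins_30d == 0:   score += 3
    elif logins_30d <= 4: score += 2
    elif logins_30d <= 9: score += 1
    # Monetary/usage change vs 3 months ago
    if usage_change_pct <= -30:   score += 3
    elif usage_change_pct <= -15: score += 2
    elif usage_change_pct < 0:    score += 1
    # Sentiment/support (complaint OR low NPS counts once here)
    if complaint_90d:                  score += 3
    elif nps is not None and nps <= 6: score += 3
    elif nps is not None and nps <= 8: score += 1
    # Early tenure (onboarding risk)
    if tenure_days < 90: score += 2
    bucket = "Low" if score <= 3 else "Medium" if score <= 7 else "High"
    return score, bucket

client_a = churn_score(75, 0, -35, True, None, 730)   # matches Client A: 12, High
client_b = churn_score(25, 6, -18, False, 7, 240)     # matches Client B: 5, Medium
client_c = churn_score(10, 12, 5, False, None, 45)    # matches Client C: 2, Low
```

    Drop this logic into spreadsheet formulas or a script over your CRM export; the buckets then map straight onto the plays in step 2.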

    Common mistakes and quick fixes

    • Chasing one signal: combine 3–4 signals; scores become more trustworthy.
    • Discount-first reflex: fix root causes first; reserve credits for service recovery or proven saves.
    • No control group: always keep a holdout; it shows what truly works.
    • Cluttered playbook: cap to 3 actions per bucket; scripts fit on one page.

    Copy-paste AI prompt

    Act as a customer retention analyst and spreadsheet coach. I will upload a CSV with: client_id, signup_date, last_contact_date, last_login_date, monthly_revenue, revenue_3mo_ago, logins_last_30d, complaints_last_90d, nps_score, outcome_30d (stayed/churned/upsold). Do the following: 1) Propose an RFM-style churn score with exact thresholds and weights that fit these columns. 2) Generate Excel/Google Sheets formulas for each feature and the total score. 3) Define Low/Medium/High buckets and a one-line action for each. 4) Create a trigger→action matrix (recency, frequency, monetary drop, sentiment, early tenure) with phone/email scripts (30 seconds and 50 words). 5) Design a 10% per-bucket holdout test and the metrics to compare (retention uplift, cost per save). 6) List 8 feature-engineering ideas for a future simple AI model, ensuring no data leakage. Return the scorecard, formulas, scripts, and test plan in clear steps I can copy into my sheet.

    1-week action plan

    1. Today: add the RFM columns and the Silent Risk flag; sort and pick top 10 clients.
    2. Day 1–2: run the High/Medium/Low plays; log outcomes and reasons.
    3. Day 3–5: holdout design in place; continue outreach; adjust scripts based on objections heard.
    4. Day 7: review uplift vs holdout; tweak thresholds; set weekly cadence.
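The Day-7 review ("uplift vs holdout") is simple arithmetic. A sketch, with all the traffic and cost figures below invented purely for illustration:

```python
# Sketch of the Day-7 review: retention uplift vs the holdout, plus cost
# per incremental save. All numbers below are hypothetical examples.

def retention_uplift(treated_stayed, treated_total, holdout_stayed, holdout_total):
    """Percentage-point uplift of the contacted group over the holdout."""
    treated_rate = treated_stayed / treated_total
    holdout_rate = holdout_stayed / holdout_total
    return (treated_rate - holdout_rate) * 100

def cost_per_save(outreach_cost, treated_stayed, treated_total, holdout_rate):
    """Spend divided by clients you kept beyond what the holdout predicts."""
    expected_stays = holdout_rate * treated_total  # what you'd keep anyway
    extra_saves = treated_stayed - expected_stays
    return outreach_cost / extra_saves if extra_saves > 0 else float("inf")

# 45 of 50 contacted clients stayed (90%) vs 7 of 10 in the holdout (70%)
print(retention_uplift(45, 50, 7, 10))          # 20 percentage points
print(cost_per_save(500, 45, 50, 0.70))         # $500 / 10 extra saves = 50.0
```

If extra saves are zero or negative, the play isn't working; change the script before scaling the spend.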

    What to expect: clearer priorities within days and measurable retention improvements as you iterate. The model helps you aim; the human follow-up wins the game.

    Jeff Bullas
    Keymaster

    Quick win: In 5 minutes you can generate a 4-frame storyboard for a 15-second ad. Pick a short script, paste the ready-made prompt below into an image-generator, and export the four numbered frames.

    Why this works: AI is a fast visual prototyper. Use it to lock camera angles, poses and color language before you spend time animating. Then refine the best frames for production.

    What you’ll need

    • 15–30 second script or a 4–6 beat shot list.
    • 1 clear reference image for your main character or product (improves consistency).
    • Brand colors and logo file (for final clean-up, not always during first-pass).
    • An image generator with image-to-image/inpainting + a simple editor (Photoshop, Affinity, or free alternatives).

    Step-by-step (do this)

    1. Break the script into 4–6 clear beats. Label them Frame 1, Frame 2, etc.
    2. Write a 1-paragraph visual brief: tone, camera distance, color palette, and must-have brand elements.
    3. Use the prompt below to generate numbered frames. Attach your reference image when possible.
    4. Review and pick the best frame per beat. Use inpainting to fix poses, faces, or logo placement for consistency.
    5. Assemble frames into an animatic with timing and temp audio to check pacing (many video editors let you drop images and set durations).
    6. Polish one or two frames in an editor for logo accuracy and clean edges, then hand to animation or export for motion compositing.

    Copy-paste prompt (use as-is, change details)

    Frame 1: A friendly middle-aged woman holding a smartphone, medium close-up, 16:9, flat vector style with soft shadows, warm morning light, brand palette: teal (#0AA), soft coral (#F88), neutral gray background, camera slightly angled 15 degrees. Simple kitchen counter in background, minimal props. Clean composition, no text. –v 1

    Frame 2: Over-the-shoulder shot of phone screen showing the app opening, close-up, same style and colors, clear readable UI placeholder, shallow depth of field, bright highlight on phone. –v 1

    Frame 3: Medium shot of the woman smiling, product benefit moment, soft cinematic rim light, same character details and outfit as Frame 1, keep facial expression consistent. –v 1

    Frame 4: Wide shot showing logo reveal on the right, woman pointing, call-to-action space left, bold clear shapes, high contrast for social feed. Number frames 1-4 in the filenames.

    Example

    Script beat: 1) Open app, 2) Swipe to feature, 3) Benefit moment, 4) CTA/logo. Use the prompt above, attach one reference photo of your actor to keep features consistent.

    Common mistakes & fixes

    • AI drift on character look — Fix: lock a reference image and use image-to-image with the same seed across frames.
    • Busy backgrounds — Fix: request “minimal background” or export with transparent background for layering later.
    • Wrong logo or text — Fix: place logo manually in an editor to ensure brand accuracy.

    Action plan (next 60–90 minutes)

    • 5 minutes: Run the 4-frame prompt with your reference.
    • 30 minutes: Pick best frames and inpaint two frames for consistency.
    • 30–60 minutes: Assemble an animatic and test pacing with voice or music.

    Reminder: Use AI for speed and ideas, not the final logo placement. Lock the visual language early, then polish selectively. Small iterations beat perfect first drafts.

    Jeff Bullas
    Keymaster

    Great question — focusing on returns, warranties and repairs is one of the fastest ways to cut cost and boost customer trust. Nice to see you prioritise the customer experience.

    Here’s a simple, practical playbook you can start this week using AI to automate triage, routing and status updates.

    What you’ll need

    • Product list with serial/sku and warranty rules (spreadsheet or database)
    • Customer intake form (web form or email template) that collects order#, photo, issue, serial#
    • Simple ticket system or CRM (even a spreadsheet or Trello will do)
    • An AI assistant (Chat-style AI or API) and a no-code automation tool to connect form → AI → ticket

    Step-by-step (do this first)

    1. Map the current flow: customer request → inspection → decision (repair, replace, refund) → completion.
    2. Create a standard intake form with required fields: order#, date, serial, photos, short description.
    3. Use AI to triage incoming requests: warranty valid? probable fault category? urgency?
    4. Auto-label ticket and route: repairs team, return-authorisation, or refund queue.
    5. Send an automated, human-tone reply with next steps and expected timeline.
    6. Log resolution, capture root cause, and feed data back to improve triage rules.

    Copy-paste AI prompt (use this in your automation)

    “Customer submitted a return/repair request. Fields: order#: {ORDER}, purchase_date: {DATE}, serial#: {SERIAL}, photos: {PHOTO_LINK}, description: {DESCRIPTION}. Based on warranty start date and our policy (warranty_period_months = 12), classify the request as: ‘In Warranty – Repair’, ‘In Warranty – Replace’, ‘Out of Warranty – Quote Repair’, or ‘Refund Requested’. Provide: short reason (one sentence), suggested next action, required parts/tools, and an estimated time to resolution. If photos show external damage, flag as ‘Possible Abuse’. Reply in 3 short sentences.”

    Worked example

    Customer submits: order# 1234, serial ABC-999, bought 10 months ago, photo shows device with a non-functioning button. AI triage returns: “In Warranty – Repair. Likely faulty switch; request bench test. Send pre-paid return label and estimate 5–7 business days.” Automation then creates a repair ticket, emails the customer the label and estimated date, and notifies the repair team.
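The warranty decision in that prompt is easy to mirror in code for testing your automation. A hedged sketch: the 12-month window and the category labels come from the prompt above, while the days-per-month approximation (30.44) and the "default to repair" rule are my own assumptions.

```python
# Triage sketch mirroring the AI prompt's warranty decision.
# Assumptions: 30.44 days/month average; in-warranty defaults to Repair
# (the AI or a photo review may upgrade it to Replace).

from datetime import date

def classify_request(purchase_date, today, refund_requested=False,
                     warranty_months=12):
    """Return the triage label used in the AI prompt."""
    if refund_requested:
        return "Refund Requested"
    months_owned = (today - purchase_date).days / 30.44
    if months_owned <= warranty_months:
        return "In Warranty – Repair"
    return "Out of Warranty – Quote Repair"

# Worked example: bought 10 months ago -> inside the 12-month window.
print(classify_request(date(2024, 1, 5), date(2024, 11, 5)))
# In Warranty – Repair
```

Run your 10 sample cases (Day 4 of the plan below) through both the AI and this rule to catch disagreements before go-live.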

    Do / Do not (quick checklist)

    • Do require serial/order# — it speeds decisions.
    • Do ask for a clear photo and short problem description.
    • Do keep replies human and time-bound.
    • Do not ask for unnecessary data — it causes drop-off.
    • Do not rely on AI alone for safety-critical checks — use human review for edge cases.

    Common mistakes & fixes

    • Mistake: Vague prompts. Fix: use the exact prompt above and include policy data.
    • Mistake: Missing photos/serials. Fix: make fields required and give examples of good photos.
    • Mistake: No SLA. Fix: promise and track clear timelines.

    7-day action plan

    1. Day 1: Map process and list required fields.
    2. Day 2–3: Build intake form and simple ticket board.
    3. Day 4: Connect AI to triage and test with 10 sample cases.
    4. Day 5: Create templates for customer replies and labels.
    5. Day 6–7: Run pilot, collect feedback, and update rules.

    Start small, measure time saved and customer satisfaction, then scale. If you want, tell me one product and your warranty length and I’ll draft the exact triage rules for you.

    All the best, Jeff

    Jeff Bullas
    Keymaster

    Nice starting point — that 5-minute churn-rate check is exactly the quick win that kickstarts everything. Now let’s turn that insight into predictable retention actions you can run this week.

    Short context: predicting churn is useful only when it leads to simple, repeatable actions for your team. Keep the tech light at first and focus on clarity: who to contact, what to say, and how to measure.

    1. What you’ll need
      • A spreadsheet or CRM export with: client ID, signup date, last contact date, product, monthly revenue or balance, recent activity (last login/visit), complaints/support tickets, NPS or satisfaction if available.
      • A short action menu (phone call, personalized email, special offer, schedule review) and one responsible person for each.
    2. Step-by-step (do this in the next 90 minutes)
      1. Calculate base churn rate (you already did). Use it as your baseline metric.
      2. Create a simple rule-based risk score in your sheet. Example points: no contact in 6+ months = +3, revenue drop 20%+ = +2, recent complaint = +3, NPS <=6 = +4.
      3. Sum points and bucket: 0–2 Low, 3–5 Medium, 6+ High.
      4. Attach actions: High = phone call within 48 hours + retention offer; Medium = personalized email + 1-week follow-up; Low = include in next scheduled check-in.
      5. Record outcome for each contact (stayed, churned, upsold) and compare to baseline weekly.
    3. Example
      • Client A: no contact 7 months (+3), revenue down 30% (+2), total = 5 → Medium → send personalized email and offer a 15-minute review meeting.
      • Client B: NPS 4 (+4), complaint last month (+3), total = 7 → High → phone call same day and manager involvement.
    4. Common mistakes & fixes
      • Relying on one signal (mistake): fix by combining 3–4 signals to reduce false positives.
      • Actions too complex (mistake): fix by limiting to 2–3 repeatable responses.
      • No measurement (mistake): fix by tracking outcomes and running quick A/B tests (call vs email) for top 10% risk group.
    5. Next steps (30/60/90 day plan)
      1. 30 days: run the rule-based scoring and log outcomes weekly.
      2. 60 days: refine point weights based on what worked; automate flagging in your CRM.
      3. 90 days: consider a simple predictive model (no-code or vendor) to learn patterns — but keep actions unchanged until validated.
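The step-2 point rules translate directly into code, which is handy for double-checking the spreadsheet. Everything below uses the exact thresholds listed; signals not listed score zero.

```python
# Direct translation of the step-2 rules: no contact 6+ months = +3,
# revenue drop 20%+ = +2, recent complaint = +3, NPS <= 6 = +4.
# Buckets per step 3: 0-2 Low, 3-5 Medium, 6+ High.

def risk_score(months_since_contact, revenue_drop_pct, recent_complaint, nps):
    score = 0
    if months_since_contact >= 6:
        score += 3
    if revenue_drop_pct >= 20:
        score += 2
    if recent_complaint:
        score += 3
    if nps is not None and nps <= 6:
        score += 4
    return score

def bucket(score):
    return "High" if score >= 6 else "Medium" if score >= 3 else "Low"

# Client A: no contact 7 months (+3), revenue down 30% (+2) -> 5, Medium
print(risk_score(7, 30, False, None))   # 5
# Client B: NPS 4 (+4), complaint last month (+3) -> 7, High
print(risk_score(0, 0, True, 4))        # 7
```

If the function and your sheet disagree on any client, fix the sheet formula before acting on the buckets.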

    Copy-paste AI prompt (use this with a chat assistant or no-code tool)

    Act as a customer retention analyst. I will upload a CSV with columns: client_id, signup_date, last_contact_date, monthly_revenue, revenue_3mo_ago, last_login_date, complaints_last_12mo, nps_score. Suggest 6 feature-engineering ideas, create a simple risk scoring approach, propose three prioritized retention actions tied to risk levels, and outline an A/B test to measure uplift. Also draft a 30-second phone script for high-risk clients and a 50-word personalized email template for medium-risk clients.

    Action plan right now: build the spreadsheet score today, assign owners, make 10 targeted contacts this week, and measure results next week. Keep it small, human, repeatable — the tech follows the process, not the other way around.

    Jeff Bullas
    Keymaster

    Quick yes — and one small correction: export a full week, but when you paste “10 rows” into the AI, make sure those 10 rows are representative (different days, tasks, billable vs non-billable). Also make sure duration units are consistent (hours or minutes) and anonymize client names. That prevents misleading patterns.

    Why this matters

    AI will find patterns fast, but it needs clean, consistent input. The goal: one clear experiment you can run for 7 days and measure. Don’t chase perfect analysis — chase a practical change.

    What you’ll need

    • One-week export (CSV/Excel) — 10–50 rows; start with 10 representative rows.
    • Columns: date, project/client (anonymize), task (use simple labels), duration_hours (numeric), billable (yes/no), notes.
    • AI chat or editor where you can paste rows and a prompt.

    Step-by-step (do this now)

    1. Export 7 days. Remove or anonymize client names and confirm duration units.
    2. Pick 10 rows that show variety: meetings, email, deep work, admin, billing.
    3. Paste the 10 rows into an AI chat with the prompt below.
    4. Ask for: top 3 time drains, 2 tasks to delegate/automate, and one 7-day experiment to reclaim 3–5 hours with exact steps.
    5. Choose one experiment. Block it on your calendar and set a simple measurement rule (daily log or billable% check).
    6. Run 7 days, keep a 2-line daily log, then re-export and repeat the analysis.
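If you'd rather verify the AI's pattern-finding yourself, the step-4 analysis is mostly grouping and summing. A sketch using the tiny task taxonomy from this post; the rows are hypothetical and durations are already in hours (per step 1):

```python
# Sketch of step 4: sum hours per task label, rank by total, and compute
# billable %. Rows are invented; use your own anonymized export.

from collections import defaultdict

rows = [  # (task, duration_hours, billable)
    ("Meetings", 1.0, False), ("Email", 0.5, False), ("Deep Work", 3.0, True),
    ("Meetings", 0.75, False), ("Admin", 1.5, False), ("Deep Work", 2.5, True),
    ("Email", 0.75, False), ("Meetings", 1.25, False), ("Billing", 0.5, False),
    ("Deep Work", 2.0, True),
]

totals = defaultdict(float)
billable_hours = 0.0
for task, hours, billable in rows:
    totals[task] += hours
    if billable:
        billable_hours += hours

drains = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
all_hours = sum(totals.values())
print(drains[:3])                                   # top 3 tasks by hours
print(f"billable: {billable_hours / all_hours:.0%}")
```

Re-running this on the Day-8 export gives you a before/after billable% comparison without re-prompting the AI.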

    Example outcome you can expect

    • AI finds 5 hours/week lost to short ad-hoc meetings and suggests batching two 60-minute meeting blocks and replacing status check-ins with a 1-paragraph email.
    • AI flags recurring invoice creation as automatable and suggests a template + an automation to send invoices on Fridays.
    • Suggested 7-day experiment: enforce two focused meeting blocks + limit meetings to 25 minutes. Expected: 2–4 hours reclaimed that week and a small bump in billable% (actual results vary).

    Common mistakes & fixes

    • Do NOT paste mixed units (hours + minutes). Fix: convert all to hours.
    • Do NOT use vague task names. Fix: map to a tiny taxonomy: Email, Meetings, Deep Work, Admin, Billing.
    • Do NOT run many experiments at once. Fix: one change for 7 days.

    7-day action checklist

    1. Day 1: Export 7 days, pick 10 rows, run the AI prompt below.
    2. Day 2: Pick one experiment, calendar-block it, tell stakeholders if needed.
    3. Days 3–7: Run the experiment. Each day log: 1) what I changed, 2) minutes reclaimed.
    4. Day 8: Re-export and rerun AI prompt; compare metrics and decide next step.

    Copy-paste AI prompt (use as-is)

    Here are 10 rows of my time-tracking (columns: date, project/client [anonymized], task, duration_hours, billable, notes). Please analyze and give me: 1) top 3 patterns/time drains with hours-per-week estimates, 2) two practical tasks I can delegate or automate and exactly how, 3) one clear 7-day experiment designed to reclaim 3–5 hours (step-by-step), 4) one simple daily metric to track, and 5) a 2-line daily log template I can copy. Be concrete and action-focused.

    Do the experiment. Measure. Repeat.

    Jeff Bullas
    Keymaster

    Quick nudge: nice sprint. That three-review aggregation above the CTA is one of the fastest, highest-leverage moves you can make. Do it right and you’ll see clicks nudge up within hours.

    Why this tiny habit wins: it turns scattered praise into a single credible promise. Visitors see a clear benefit + a small qualifier instead of fuzzy praise — that reduces hesitation and increases clicks.

    What you’ll need

    • A short set of reviews (3–10) for the theme you’ll test.
    • A spreadsheet with columns: review, rating, persona tag, consent flag, specificity score (1–5).
    • Timer for sprint work (10–15 minutes).
    • One quick QA check (colleague or checklist) to verify numbers and consent.

    Step-by-step sprint (under 30 minutes)

    1. 10-minute cluster sprint — scan and tag themes (speed, savings, support). Choose the theme with the clearest specifics.
    2. 5-minute pick — select 2–3 high-specificity reviews. Copy one short verbatim phrase to preserve voice.
    3. 5-minute ladder write — create Levels 1–4 quickly:
      1. Level 1: 10–14 word verbatim quote (emotional anchor).
      2. Level 2: one quantified line tied to one review (number + timeframe if present).
      3. Level 3: aggregated proof with a qualifier (“among recent customers” or “in our beta”).
      4. Level 4: two-sentence mini case (situation → change → result).
    4. 2-minute QA — confirm consent, redact any PII, and ensure metrics are accurate (or use ranges/qualifiers).
    5. 5-minute deploy & test — place Level 3 above the primary CTA and Level 2 under the button. Swap copy (or run an A/B) for 24–72 hours to check early signals.

    Worked example (copy this pattern)

    • Review A: “Live in 30 minutes — we launched the same day and stopped chasing issues.”
    • Review B: “Set up in under 30 minutes — saved us days of back-and-forth.”
    • Review C: “From signup to first result in half an hour — it just worked.”
    • Level 1 (Verbatim): “Live in 30 minutes — we launched the same day.”
    • Level 2 (Quantified): “Live in ~25–30 minutes — same-day launch for many customers.”
    • Level 3 (Aggregated): “Among recent customers, setup took about 30 minutes and enabled same-day launch.”
    • Level 4 (Mini case): “Before: launches dragged for days. After: live in ~30 minutes — same-day results and less firefighting.”

    Common mistakes & fixes

    • Mistake: Aggregating different use cases. Fix: Cluster by persona/use-case first.
    • Mistake: Using exact numbers from tiny samples without qualifiers. Fix: Use ranges or labels like “among recent customers.”
    • Mistake: Removing all customer voice. Fix: Keep one short verbatim phrase in every block.

    Copy-paste AI prompt (use as-is)

    “You are a trustworthy copy editor. Using these customer reviews: [PASTE 2–3 REVIEWS], do the following: 1) Extract the shared outcome; 2) List any consistent numbers/timeframes; 3) Identify one short verbatim phrase to keep. Then produce a Proof Ladder: A) Level 1: a 10–14 word verbatim quote; B) Level 2: a single quantified proof line tied to one review; C) Level 3: an aggregated proof line that combines the reviews with a clear qualifier (e.g., ‘among recent customers’); D) Level 4: a two-sentence mini case (situation, change, result). Keep tone: clear, non-salesy, specific. Do not invent numbers. If numbers conflict, use a range or a non-numeric qualifier.”

    Short action plan — next 48 hours

    1. Export last 60–90 days of reviews and tag by theme (30–60 minutes).
    2. Run one 30-minute sprint: cluster, pick 3 reviews, build a Proof Ladder, QA, and deploy Level 3 above CTA.
    3. Monitor clicks for 24–72 hours and compare to baseline; keep the winner and repeat weekly.
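For the "compare to baseline" step, a quick two-proportion z-test keeps you honest about whether an early click bump is signal or noise. A sketch with made-up traffic numbers (the 1.96 cutoff is the usual 95%-confidence rule of thumb):

```python
# Sketch of the baseline check: click-through before vs after the proof
# block, with a two-proportion z-test. Traffic numbers are hypothetical.

import math

def ab_check(clicks_a, visits_a, clicks_b, visits_b):
    """Return (uplift in percentage points, z-score) for B vs A."""
    ra, rb = clicks_a / visits_a, clicks_b / visits_b
    pooled = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return (rb - ra) * 100, (rb - ra) / se

# 4.0% baseline CTR vs 6.0% with the Level 3 proof block above the CTA
uplift, z = ab_check(40, 1000, 60, 1000)
print(f"uplift {uplift:.1f} pts, z = {z:.2f}")  # |z| > ~1.96 = promising
```

With only 24–72 hours of data, treat a large |z| as a reason to keep testing, not a final verdict.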

    Closing reminder: little, consistent wins beat occasional big overhauls. Ship one proof block this morning — iterate next week. Keep one human in the loop for numbers and consent, and you’ll build a persuasive library without a huge toolbelt.

    Jeff Bullas
    Keymaster

    Nice point — you’re right: narrative is the efficiency tool. Short talks demand a single claim and a clear CTA. I’ll add a practical, do-first recipe to turn that into a finished 5-slide delivery fast.

    Why this matters (quick): In a short talk your job isn’t to teach everything — it’s to move people to one measurable action. A tight 3-act arc (hook → conflict → resolution) gives permission to cut the rest.

    What you’ll need:

    • A one-line audience description (role + their top pain + desired outcome).
    • Talk length in minutes and target KPI (e.g., 15 signups).
    • 5–7 proof points (stats, short case, quote).
    • Phone timer and a colleague to run the talk for feedback.

    Step-by-step — do this now (45–90 minutes):

    1. Use the AI prompt below to generate a 3-act, 5-slide outline (hook, core claim, 3 beats, CTA).
    2. Pick the one-sentence core message from the AI output — this is your north star.
    3. Choose three supporting beats. Attach one proof and one visual idea to each (chart, quote, quick case).
    4. Write five slide titles and one-line speaker notes: Hook (15s), Beat 1 (45–60s), Beat 2 (45–60s), Beat 3 (45–60s), CTA (20–30s).
    5. Do a timed run. Cut anything that doesn’t directly support the core message or CTA. Repeat until under time by 5–10 seconds.

    Copy-paste AI prompt (use exactly or edit brackets):

    “Create a persuasive 3-act narrative for a short talk. Audience: [job title and top pain]. Length: [minutes]. Goal: [single measurable outcome]. Deliver: 1) a 15-second hook, 2) a one-sentence core message, 3) three supporting points with one proof point and a simple visual idea each, 4) a 20-second closing call to action tied to the goal, and 5) slide cues (title + one-line speaker note) for 5 slides. Keep language direct and simple; make the CTA a single, measurable ask.”

    Worked example (7-minute talk for small business owners on getting repeat customers):

    • Hook: “Most repeat sales don’t happen by luck — they happen because you treat first-time buyers like future fans, not transactions.”
    • Core message: Focused post-purchase follow-up increases repeat sales faster than chasing new traffic.
    • 3 beats: (1) Simple welcome sequence (proof: +15% repeat rate), (2) One low-cost offer in week 3 (proof: 8% conversion), (3) Ask for feedback + referral (proof: 12% referral increase).
    • CTA: “Sign up at the end for a 3-email template bundle — I’ll email your first template today.”

    Common mistakes & fixes:

    • Too many points — fix: drop to three and tie each to the single claim.
    • Stats without implication — fix: add one sentence: “Which means for you…” after each stat.
    • Reading the AI verbatim — fix: rewrite the hook in your voice and add one personal sentence.

    3-day quick action plan:

    1. Day 1: Run the prompt, pick core claim, draft 5 slides.
    2. Day 2: Create 3 visuals, rehearse twice with timer, tighten language.
    3. Day 3: Peer run or record, finalize CTA mechanism, deliver.

    Small, deliberate practice wins. Use the AI to shape the arc — you add the voice, the pause, and the ask.

    — Jeff

    Jeff Bullas
    Keymaster

    Your 48 px test is gold. It forces honesty: either the shape reads instantly or it doesn’t. Let’s add a couple of pro shortcuts so AI gives you cleaner silhouettes, stronger contrast, and app-store-ready exports without endless tweaking.

    Do / Don’t checklist

    • Do start with one unmistakable shape (bolt, leaf, key, letterform) and keep 10–20% padding around it.
    • Do use 1–2 high-contrast colors and a greyscale pass to verify readability.
    • Do export a 1024×1024 master, then scale down to 512, 180, 120, 48 px.
    • Do test on both light and dark backgrounds and create an inverted color variant.
    • Don’t use thin strokes; prefer solid fills or thick strokes that survive at 48 px.
    • Don’t add tiny highlights, inner shadows, or fine gradients — they muddy at small sizes.
    • Don’t rely on text or initials unless the letterform is a bold monogram.

    Insider trick: the “Keyline Ring”

    • Add a subtle 2–3% border (a solid ring) around the shape at 1024 px scale. It separates your mark from busy wallpapers without visible clutter. Keep it the darker of your two colors.
    • At 1024, a 14–18 px ring thickness usually survives down to 48 px.

    What you’ll need

    • One-line brief (symbol + mood).
    • Any AI image/logo tool plus a basic editor (crop/resize/export PNG/SVG).
    • A simple mock screen (take a phone screenshot) to preview real-world size.

    Step-by-step: the Icon DNA sprint (20 minutes)

    1. Define DNA (3 minutes): Symbol + mood + color vibe (e.g., “sleep — calm & trustworthy = crescent moon + rounded corners, deep blue + soft white”).
    2. Generate (5 minutes): Run the prompt below for three square options with transparent backgrounds and SVG if possible.
    3. Small-size audit (4 minutes): Place each at 1024, 180, 48 px on a light and dark tile. Do the squint test and a greyscale pass. Kill anything that blurs.
    4. Refine (5 minutes): Add 10–20% padding, test rounded corners and a squircle variant, thicken strokes or convert to solid fills, add a subtle keyline ring if needed.
    5. Export pack (3 minutes): PNG 1024, 512, 180, 120, 48; plus SVG. Name consistently: appicon_v1a_dark.png, appicon_v1a_light.png, etc.

    Copy-paste AI prompt (robust)

    “Design three square app icon concepts for [your one-line brief]. Constraints: one bold, single-symbol silhouette; 1–2 high-contrast colors; no small text; no fine details; transparent background. Provide: (1) 1024×1024 and 512×512 PNGs, (2) an SVG vector with clean paths, (3) a rounded-corner and a squircle variant, (4) a light-background and a dark-background version, (5) an optional subtle 2–3% keyline ring for separation. Prioritize clarity at 48×48 pixels.”

    Refinement prompt (use after you pick a favorite)

    “Refine this icon: keep the silhouette identical but simplify to solid fills, add 10–20% padding, ensure contrast works in greyscale, and produce four exports—primary colors (light background), inverted colors (dark background), monochrome solid, and monochrome outline. Deliver 1024 and 48 px PNGs and the SVG.”

    Worked example

    • Brief: “sleep tracking — calm & reliable = crescent moon + star, rounded corners, deep navy + white.”
    • Expect: A crescent moon as the main shape; a single star or dot (not a cluster); navy background, white moon; or white background, navy moon.
    • Refine: Drop extra sparkles. Make the moon a solid fill with a gentle curve. Add a 2% navy keyline ring on the white-background version for separation.
    • Test: At 48 px, you should still see the moon gap distinctly. If it closes, open the crescent or thicken the inner curve.

    Advanced guardrails that save time

    • Grid: Work on a 1024×1024 square. Keep the main shape inside an 820–860 px safe area. Corner rounding: preview both 18–22% radius and a squircle mask.
    • Stroke math: If you must use strokes, test at 1024 with 16–24 px widths; anything thinner than that, convert to filled shapes.
    • Color sanity: If greyscale looks mid-grey on both elements, increase contrast or switch to a light/dark pair. Keep a flat color option; gradients can band at small sizes.
    • Negative space test: Invert the shape (cut-out on a color block). Keep whichever reads faster at 48 px.

    Mistakes & fixes

    • Busy background: If wallpapers swallow the icon, add the keyline ring or use the inverse color version.
    • Symbol feels generic: Combine two primitives (e.g., bolt + check) but merge them into one fused silhouette.
    • SVG is messy: Ask the AI for “clean, minimal paths with no unnecessary nodes.” If the file is heavy, simplify paths before export.

    Action plan (30 minutes)

    1. Write your one-line brief and pick two color options (primary + inverted).
    2. Run the robust prompt for 3 concepts. Discard anything that fails the greyscale or 48 px test.
    3. Refine your best pick with the second prompt. Add padding, corner mask, and the keyline ring if needed.
    4. Export the pack (PNG sizes + SVG) and preview on a real phone screenshot.
    5. Save v1 and create a reversed-color v1R. You now have a ready A/B pair for store assets.

    Closing thought

    Clarity beats cleverness. If your icon snaps into recognition at 48 px, you’ve done the hard part. Use AI to explore fast, then lock in a simple silhouette, strong contrast, and clean exports. That’s a professional workflow in under an hour.

    Jeff Bullas
    Keymaster

    Quick win (2 minutes): open any birthday event, paste three Voice Card lines you use, and save. Next reminder, paste the facts into the prompt below and hit generate — you’ll have a send-ready message in seconds.

    Nice point about the Voice Card — it’s the simplest way to keep tone consistent. Here’s a compact, practical follow-up that turns that idea into a repeatable pipeline you can use this week.

    What you’ll need

    • A calendar app that supports reminders (phone or web).
    • A simple place to store 2–3 facts per person (contact notes, calendar event notes, or a private spreadsheet).
    • An AI chat tool you trust (the one you already use is fine).

    Step-by-step setup

    1. Create a birthday event and add two reminders: 7 days and 1 day before.
    2. In the event notes save: Name, one Memory/Hobby, Recent update, Preferred channel (Text/Email/Card/LinkedIn), and your Voice Card (3 phrases you use / 3 you don’t).
    3. Save the prompt below as a snippet in your phone or a note app so you can paste it quickly when a reminder fires.
    4. When the 7-day reminder appears, paste the facts into the prompt, ask for 2–3 tone options, pick one, tweak a line (10–20%) and schedule or send.
    5. Use the 1-day reminder only for last-minute tweaks or a quick text.

    Copy-paste AI prompt (use as-is)

    “You are helping me write birthday messages in my voice. My Voice Card: I often say [3 phrases you use]. I avoid [3 phrases you don’t use]. Keep language natural, warm, and concise. Do not invent facts. Using: Name = [Name], Memory/Hobby = [Memory], Recent update = [Update], Channel = [Text/Email/Card/LinkedIn], write 3 options: (1) warm, (2) playful, (3) professional. Provide a 1–2 sentence version for text and a 2–4 sentence version for a card. End with a simple sign-off if a card. Offer one optional emoji for text (max one).”

    Worked example

    • Notes: Anna — loves cycling; just promoted to Senior Manager; channel: Text; Voice Card do: “so proud of you”, “cheers”, “hope it’s a great one”; don’t: “bestie”, “yo”, “blessed”.
    • Paste into the prompt above. One result might be: “Hi Anna — so proud of you on the promotion! Hope you get a celebratory ride this week. Wishing you a fantastic year ahead. — [Your name]”

    Mistakes & fixes

    • Outdated facts — add a yearly “Review” reminder to update notes.
    • Messages sounding generic — force one unique detail in every note and ask AI to lead with it.
    • Privacy worry — keep sensitive data out of the AI prompt; use first names and neutral facts only.

    3-step one-week action plan

    1. Day 1: Create your Voice Card and save the prompt as a snippet.
    2. Day 2: Add or update 10 important birthdays with 2–3 facts each and reminders.
    3. Day 3: Run the prompt for the next upcoming birthday and schedule the message.

    Small habit. Big relationship ROI. Aim to draft one birthday this week — you’ll see how fast it becomes part of your routine.
