Win At Business And Life In An AI World


aaron

Forum Replies Created

Viewing 15 posts – 466 through 480 (of 1,244 total)
    aaron
    Participant

    5-minute win: Pick one target keyword, open a private browser tab, copy the H2s from the top 3 results, paste them into your AI, and ask: “What topics are missing, what intent do these pages serve, and what 5 H2s would make a clearly better page for buyers?” You’ll walk away with a sharper outline immediately.

    The problem: AI can list keywords and draft briefs, but it guesses difficulty and misses what the live SERP actually rewards. That leads to generic content and soft rankings.

    Why it matters: Small sites win by precision, not volume. Matching search intent and offering a better angle than page-one results moves impressions, clicks, and leads fast — without buying tools or spending weeks on research.

    Lesson: Treat AI as a force-multiplier for three tasks: 1) turning your business goal into intent-led keywords, 2) compressing SERP analysis into a one-page snapshot, and 3) producing a brief that’s differentiated on angle, evidence, and internal links.

    The playbook (7 steps, practical and fast)

    1. Clarify goal and audience (3 mins)
      • Write one sentence: audience + outcome. Example: “Homeowners over 40 comparing heat pump brands to cut energy bills.”
      • AI prompt to anchor intent: “Given [AUDIENCE] wants [OUTCOME], list the top 4 intents behind searches on [TOPIC]: informational, commercial, transactional, local.”
    2. Generate keyword set the right way (5–10 mins)
      • Ask for 25 keywords split by intent, with long-tails and ‘for [audience]’ variants. Don’t accept difficulty yet.
      • Paste into a sheet with columns: Keyword, Intent (AI), Notes.
    3. Reality-check with “SERP Snapshot Scoring” (10 mins)
      • For your top 5 keywords, open a private tab and scan top 5 results.
      • Score each keyword (0–3) on three items: Intent fit to your goal, Competitor strength (big brands or .org/.gov/news domains = tough), and Business value (likelihood to convert). Sum for a quick KOB-style score (0–9).
      • Pick one primary (7–9 score), one backup (5–7), one easy win (4–6).
    4. Extract gaps you can own (5 mins)
      • Copy the H2s and FAQs from the top 3 results. Ask AI what they missed: price ranges, comparisons, local angles, step-by-step, calculators, mistakes, or real-world examples.
      • Decide the differentiator you’ll lead with (comparison table, cost ranges, checklist, or first-30-days plan).
    5. Build a brief that converts (10–15 mins)
      • Use the prompt below to produce a brief with: working title, 5–7 H2s (each with a 1–2 sentence purpose), word count range, meta description options, 5 FAQs, and a “Proof pack” (stats, examples, quotes to source).
      • Expectation: a clear outline that is visibly better than page one and aligned to buyer intent.
    6. Map internal links (5 mins)
      • List 5–10 relevant pages on your site. Ask AI where each should link from and the anchor text to use (plain-language, descriptive).
      • Add 2 outbound links to authoritative resources to increase trust.
    7. Publish, test titles, iterate (ongoing)
      • Publish, then test one new title/meta variant after 2 weeks if CTR is weak.
      • Add one update in week 4: an extra example, a mini table, or a short checklist.
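If you keep the sheet from step 2, the snapshot scoring in step 3 is easy to automate. A minimal Python sketch of the sum-and-bucket arithmetic; the keywords and scores are illustrative, not real SERP data:

```python
# SERP Snapshot Scoring: each keyword gets 0-3 on intent fit,
# competitor strength, and business value. Total is 0-9; bucket
# into primary / backup / easy-win candidates.

def snapshot_score(intent_fit, competition, value):
    for s in (intent_fit, competition, value):
        assert 0 <= s <= 3, "each factor is scored 0-3"
    return intent_fit + competition + value

# Illustrative keywords and (intent, competition, value) scores.
keywords = {
    "heat pump brands compared": (3, 2, 3),
    "how heat pumps work": (2, 1, 1),
    "heat pump cost calculator": (3, 2, 2),
}

scored = {kw: snapshot_score(*s) for kw, s in keywords.items()}
for kw, total in sorted(scored.items(), key=lambda x: -x[1]):
    bucket = ("primary" if total >= 7 else
              "backup" if total >= 5 else "easy win or skip")
    print(f"{total}/9  {kw}  -> {bucket}")
```

Run it after each SERP scan and the primary/backup/easy-win split falls out automatically.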

    Copy-paste prompts (ready to use)

    • Keyword set + SERP cues: “I’m targeting [TOPIC] for [AUDIENCE] to achieve [BUSINESS GOAL]. Provide 25 keyword ideas grouped by intent (informational, commercial, transactional, local). For each, add: 1) long-tail variant, 2) likely content format Google rewards (guide, comparison, checklist, FAQ), 3) likely decision factors (price, features, risks). Keep it concise.”
    • SERP gap finder (paste headings): “Here are H2/H3s from the top 3 results for [KEYWORD]: [PASTE]. Identify the missing angles a buyer would want. Propose a better outline with 6 H2s, each with a one-sentence purpose, and list a ‘Proof pack’ (stats to cite, examples to include, simple comparison table).”
    • Conversion-focused brief: “Create a content brief for [PRIMARY KEYWORD] for [AUDIENCE]. Include: 1) working title options (3), 2) 5–7 H2s with purpose lines, 3) word count range, 4) meta description options (under 155 chars), 5) 5 FAQs phrased as questions, 6) internal link suggestions for these URLs [PASTE YOUR URLS OR TITLES], 7) a 3-bullet ‘Call-to-Action plan’ that feels helpful, not salesy.”

    Metrics that matter (simple, no paid tools)

    • Indexation speed: page discovered within 72 hours.
    • Impressions: +30–100% in 4 weeks for target queries.
    • Average position: reach positions 10–20 by week 4; aim for 4–10 by week 8.
    • CTR: improve title/meta to hit 3–6% on commercial intents; 2–4% informational.
    • Engagement: time on page 1:30–3:00; scroll to 75% for at least 30% of readers.
    • Leads: 1 clear CTA clicked by 1–3% of visitors for commercial pages.

    Common mistakes and fast fixes

    • Chasing volume over intent → Filter for commercial intent if your goal is leads/sales.
    • Generic outlines → Add a differentiator: price ranges, comparison table, checklist, or first-30-days plan.
    • Overwriting without proof → Add a Proof pack: stats, example, quote, or mini case.
    • No internal links → Add 3–5 internal links with descriptive anchors; 1–2 authoritative outbound links.
    • Waiting months to tweak → If CTR is low after 14 days, test a new title/meta focusing on outcome and specificity.

    One-week action plan

    1. Day 1: Run the Keyword set prompt. Shortlist 3 keywords using the 0–9 score (intent, competition, value).
    2. Day 2: For the primary keyword, do the SERP gap finder with pasted H2s. Lock the differentiator.
    3. Day 3: Generate the conversion-focused brief. Add internal link plan from your existing pages.
    4. Day 4: Draft the content using the brief. Insert comparison table, checklist, and FAQs.
    5. Day 5: Edit for clarity and add Proof pack elements. Publish.
    6. Day 6: Submit URL for indexing. Add to your performance report: impressions, position, CTR baseline.
    7. Day 7: Review early signals. If CTR is under 2%, craft an alternative title/meta for week-2 test.

    Set expectations: This approach won’t guarantee page-one overnight. It will remove guesswork, get you indexed fast, and surface steady gains (impressions → clicks → leads) in 2–8 weeks, with minimal tools.

    Your move.

    aaron
    Participant

    Quick win (under 5 minutes): open Google Maps or Waze, enter your usual route, then compare the suggested travel time for leaving now vs. 30 minutes later — record which is faster. That single check will often save 10–20 minutes immediately.

    Good question — thinking about predicting the best time windows for errands is the right framing. You want outcomes: predictable travel time, fewer interruptions, less fuel and stress.

    The problem: live traffic is noisy. Relying on one-off estimates or gut feel wastes time and adds uncertainty to your day.

    Why it matters: reducing unpredictable commute/errand time scales directly to saved hours per week, better scheduling of appointments, and fewer late arrivals.

    What works (short lesson): combine quick live checks with simple historical patterns. Traffic follows routines: morning peaks, school runs, and local events. You don’t need a full engineering team — a 1–2 week dataset plus a basic AI prompt will reveal recurring windows reliably.

    1. What you’ll need: your phone (Maps/Waze), a simple spreadsheet (Excel/Google Sheets), and access to an AI assistant (ChatGPT or similar).
    2. Collect (how): for 7–14 days, log: date, day of week, departure time, origin/destination, travel time (minutes), and note weather/events. Use the spreadsheet template: Date | Day | Time | Origin | Destination | Minutes | Weather | Event.
    3. Analyze (how): paste the spreadsheet rows into the AI assistant and run the prompt below. It will identify 1-hour windows with the lowest average travel time and confidence levels.
    4. Act (how): block recommended low-traffic windows in your calendar or set a reminder to leave earlier/later. Re-run analysis monthly or after major schedule changes.
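The window analysis in step 3 is simple enough to run locally if you'd rather not paste your log into a chat. A minimal sketch using the same columns as the template; the log entries are made up for illustration:

```python
# Find the lowest-average travel-time window per day from a simple log.
# Rows mirror the template: Day | departure Time | Minutes.
from collections import defaultdict
from statistics import mean

# Illustrative log entries (day, departure "HH:MM", minutes) - not real data.
log = [
    ("Thu", "17:10", 34), ("Thu", "17:40", 38), ("Thu", "19:05", 22),
    ("Thu", "19:30", 21), ("Fri", "08:15", 29), ("Fri", "10:05", 18),
]

# Group trips into one-hour windows keyed by (day, hour).
windows = defaultdict(list)
for day, hhmm, minutes in log:
    hour = int(hhmm.split(":")[0])
    windows[(day, hour)].append(minutes)

# Average each window and keep the sample size; small samples
# deserve low confidence, exactly as the prompt below asks for.
summary = {k: (round(mean(v), 1), len(v)) for k, v in windows.items()}
best_thu = min((k for k in summary if k[0] == "Thu"),
               key=lambda k: summary[k][0])
print(best_thu, summary[best_thu])  # -> ('Thu', 19) (21.5, 2)
```

With 7–14 days of data the per-day minimums line up with the AI's recommended windows, and you can sanity-check its answer against your own numbers.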

    Copy-paste AI prompt (use as-is):

    “I have this dataset with columns: date, day_of_week, departure_time (HH:MM), origin, destination, travel_time_minutes, weather, event_flag. Please analyze and output: 1) for each day_of_week, the top 3 one-hour windows with the lowest average travel_time and sample sizes; 2) confidence score (low/medium/high) based on sample size and variance; 3) simple rules like ‘avoid 17:00–18:00 Thu’ and suggested alternative windows; 4) recommended minimum sample size for reliable prediction. Explain results in plain English.”

    Metrics to track:

    • Average minutes saved per trip
    • Percentage of errands completed within expected time
    • Total minutes saved per week
    • Sample size used for each window

    Common mistakes & fixes:

    • Mistake: trusting single-day data. Fix: collect 7–14 days (14+ ideal).
    • Mistake: ignoring special events/weather. Fix: flag and exclude anomalies from baseline.
    • Mistake: using live traffic only. Fix: combine historical patterns with live checks for final decisions.

    1-week action plan:

    1. Day 1: Do the 5-minute quick-win check and set up the spreadsheet.
    2. Days 2–7: Log every errand/run (time and minutes). Flag weather/events.
    3. Day 7: Paste collected rows into the AI prompt above and get recommended windows and rules.
    4. Day 8: Implement calendar blocks or reminders for the recommended windows and re-check results after two weeks.

    Your move.

    aaron
    Participant

    Quick win (5 minutes): Paste a 600–1,200 word transcript into the prompt below and ask for five 45-second scripts — you’ll get usable hooks you can test today.

    A useful point from above: AI does speed up research, scripts, and captions. Agreed. I’ll add what matters next: predictable KPIs and a simple workflow so you turn clips into revenue, not just views.

    The problem

    People churn out clips but don’t track what converts. Views without conversion = wasted time. You need a repeatable way to find moments that prompt action and a measurement loop to improve.

    Why this matters

    Short clips scale attention quickly — but they’re only monetizable when tied to a clear CTA and funnel. Optimizing for watch-time and CTA clicks reduces acquisition cost and creates predictable ROI.

    My experience / lesson

    I run this on client podcasts and long-form webinars: pick the best 2 minutes, test 3 hooks, double down on the winner. Results: 20–40% lift in CTR to signup pages within two weeks.

    What you’ll need

    • Source asset (article, podcast transcript, or full video)
    • Auto-transcription tool (or manual transcript)
    • AI writing tool (ChatGPT-style)
    • Simple editor (mobile app or desktop) for cutting and captions
    • Thumbnail creator and scheduler

    Step-by-step (clear, fast path)

    1. Pick one high-value asset (webinar, podcast episode) — highest relevance to your product/service.
    2. Create a clean transcript (auto-transcribe, 5–15 minutes edit).
    3. Run the transcript through the AI prompt below — get 5 scripts with hooks and CTAs.
    4. Select 2 scripts: one emotional hook, one practical hook.
    5. Clip or re-record to match script; add captions and a clear verbal CTA within last 3 seconds.
    6. Publish native-format clips across platforms; A/B thumbnails for the first 48 hours.
    7. Boost the top performer with a small ad spend and link to a one-step landing page.

    Copy-paste AI prompt (use as-is)

    Paste the transcript below. Create five 45–60 second video scripts optimized for Instagram Reels, TikTok, and YouTube Shorts. For each script include: 1) a 3-second hook, 2) three concise points, 3) one direct call-to-action (sign-up, link, DM), 4) suggested on-screen captions, and 5) a thumbnail idea. Keep language second person and action-focused. Number each script.

    What to expect

    Output: 5 ready-to-use scripts, caption lines, and thumbnail ideas. Time: ~10 minutes to generate and another 30–90 minutes to produce each clip depending on editing skill.

    Metrics to track

    • Views and 3–10s retention (hook effectiveness)
    • Average watch time / % watched (content quality)
    • CTR on CTA (conversion effectiveness)
    • CPR / CPL if boosting (cost efficiency)
    • Shares and saves (organic lift)

    Mistakes & fixes

    • Weak hook → Fix: test question, surprise stat or direct benefit in first 3s.
    • Long or messy CTA → Fix: single step CTA (link or DM) and repeat visually.
    • No tracking → Fix: use UTM or short links and measure clicks to the landing page.
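For the tracking fix above, a small helper keeps UTM tags consistent across platforms and script variants. A sketch; the URL and tag values are placeholders:

```python
# Tag each clip's CTA link with UTM parameters so clicks are
# attributable per platform and per script variant.
from urllib.parse import urlencode, urlparse

def utm_link(base_url, source, medium, campaign, content):
    params = urlencode({
        "utm_source": source,      # platform, e.g. tiktok
        "utm_medium": medium,      # e.g. organic or paid
        "utm_campaign": campaign,  # e.g. webinar-clips
        "utm_content": content,    # which script/hook variant
    })
    # Append with ? or &, depending on whether a query string exists.
    sep = "&" if urlparse(base_url).query else "?"
    return f"{base_url}{sep}{params}"

link = utm_link("https://example.com/signup", "tiktok", "organic",
                "webinar-clips", "script-2-emotional")
print(link)
```

One tagged link per clip variant is enough to see which hook actually drives signups, not just views.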

    7-day action plan (exact)

    1. Day 1: Pick content + generate transcript.
    2. Day 2: Run AI prompt; pick 2 scripts.
    3. Day 3: Produce first clip; add captions and thumbnails.
    4. Day 4: Publish clips; A/B thumbnail for clip A and B.
    5. Day 5: Promote organically and via email.
    6. Day 6: Boost top performer with small budget; track CPL.
    7. Day 7: Review metrics; scale winning format.

    Your move.

    aaron
    Participant

    Quick win: Paste this one prompt into any AI chat and get 10 tailored interview questions in under 2 minutes (copy-paste prompt shown below).

    Good that there were no prior replies — we get a clean brief and can design a repeatable process.

    The problem: hiring teams waste interview time on generic questions that don’t reveal whether a candidate can actually do the job.

    Why this matters: better-aligned interview questions speed up hiring, reduce bad hires and improve new-hire performance — measurable ROI within the first 90 days.

    What I’ve learned: useful questions come from three inputs: a clear short brief, the role’s seniority, and the 2–3 non-negotiable skills/behaviors. Give an AI those inputs and it reliably produces usable, structured questions.

    1. What you’ll need: a one-paragraph role brief (3–4 sentences), a list of 2–3 must-have skills, candidate seniority (junior/mid/senior), and access to an AI chat (ChatGPT, Claude, etc.) or an internal LLM.
    2. Step 1 — Draft the brief: Write 2–4 sentences: team, purpose, top responsibilities. Expect: a clean short brief in 5–10 minutes.
    3. Step 2 — Run the prompt: Use the copy-paste prompt below. Expect: 8–12 tailored questions split by type (behavioral, technical, culture-fit) in 1–2 minutes.
    4. Step 3 — Calibrate: Ask the AI to rank questions by difficulty and flag which will reveal core skills. Expect: a prioritized list you can use for 30–60 minute interviews.
    5. Step 4 — Build the script: Select 6–8 questions (mix of behavioral + technical + clarification). Add timing: 5 minutes intro, 30–40 minutes questions, 5–10 minutes candidate Q&A.

    Copy-paste AI prompt (use as-is):

    “I have a short role brief: [paste brief]. The role is [junior/mid/senior]. Must-have skills: [skill 1], [skill 2], [skill 3]. Generate 10 interview questions split into three sections: 4 behavioral questions that reveal decision-making and culture fit; 4 technical/skills questions that can be scored; 2 situational/problem-solving questions. For each question, add a one-line rubric: what a strong answer contains (specific signals to look for). Keep language simple and practical.”

    Metrics to track:

    • Interview-to-offer rate
    • Time-to-hire (days)
    • New-hire 90-day performance score
    • Interviewer confidence (qualitative, post-interview)

    Common mistakes & fixes:

    • Using vague briefs — fix: force a 3-sentence brief template.
    • Too many generic questions — fix: require 2 scenario-based questions tied to core skills.
    • Not calibrating rubrics — fix: ask AI to include scoring cues (excellent/acceptable/weak).

    1-week action plan:

    1. Day 1: Write 3 short briefs for open roles (15–30 mins).
    2. Day 2: Generate questions with the prompt and pick top 8 per role (30–60 mins).
    3. Day 3: Calibrate rubrics with hiring manager (30 mins).
    4. Day 4: Pilot one AI-generated interview (30–60 mins).
    5. Day 5–7: Collect feedback, refine prompts and start tracking metrics.

    Your move.

    aaron
    Participant

    Good call on focusing this thread on practical, rapid ideation — that’s the lever that turns a creative workshop from talk into output.

    Quick reality: workshops stall when ideation is slow, fuzzy, or dominated by a few voices. The consequence is wasted time, weak concepts, and no clear next moves.

    Why this matters: you want a predictable flow from problem to validated idea in a single session — not vague inspiration and a to-do list that never happens. I’ve run 50+ workshops where AI accelerated ideation and gave us testable concepts by session end.

    What you’ll need

    • 1 facilitator, 4–12 participants, 60–90 minutes
    • One laptop with an AI assistant (Chat-style) and shared screen
    • Templates: problem statement, constraints, evaluation criteria
    • Timer and a simple scoring rubric (feasibility, impact, speed-to-market)

    Step-by-step method (do this in-session)

    1. Start: 5 min — Clarify the problem and success metrics aloud.
    2. Prompt: 10 min — Use the AI to generate 20 micro-ideas. Read 5 aloud, pick 3 to expand.
    3. Sprint: 15 min — Break into pairs. Each pair refines their top idea with the AI into a one-paragraph concept + user benefit.
    4. Score: 10 min — Use the rubric to score each concept. Top 3 advance.
    5. Refine: 20 min — AI creates a quick test plan and 3-sentence pitch for the top 3 ideas.
    6. Decide: 5 min — Choose 1 idea and assign owners + next 7-day experiment.

    Copy-paste AI prompt (use this in your chat window):

    “We need 20 short, distinct product/service ideas that address [problem statement]. Each idea must be actionable within 30 days, aimed at [customer persona], and list the core user benefit, one key metric to measure, and a minimal first test (one-sentence). Number them 1–20.”

    Metrics to track

    • Number of actionable ideas generated (target 20)
    • Ideas advanced to test (target ≥3)
    • Concept-to-test time (target ≤7 days)
    • Early test conversion or engagement rate (define per idea)

    Common mistakes & fixes

    1. Mistake: Ideas too vague. Fix: Force 30-day actionability and a one-sentence test.
    2. Mistake: Timing overrun. Fix: Strict timer; cut discussion if necessary and defer to async follow-up.
    3. Mistake: Dominant voices. Fix: Pair work and anonymous scoring.

    1-week action plan (clear, daily tasks)

    1. Day 1: Run workshop using steps above.
    2. Day 2: Owners write 1-page test plan (use AI to draft).
    3. Day 3–6: Run minimal tests and collect basic metrics.
    4. Day 7: Review results, decide scale/kill, and plan next sprint.

    Your move.

    aaron
    Participant

    Hook: Turn declines and deferrals into a 5-minute playbook that protects hours and keeps rapport high. The win: faster replies, fewer follow-ups, zero guilt.

    The problem: Inconsistent wording, over-explaining, and soft maybes trigger extra emails. You lose time and credibility. The fix is a standard, AI-assisted system you can run on autopilot.

    Why it matters: Clear, principled boundaries reduce back-and-forth by 30–50%. You’ll send shorter emails, get calmer responses, and keep your calendar clean without burning bridges.

    Lesson: Use a three-lane approach — decline, dated deferral, or referral — layered with tone control by audience (executive, peer, vendor). AI drafts; you add one line. Consistency beats creativity here.

    • What you’ll need: two past emails you like (voice), a “banned words” list (no exclamation marks, no “so sorry”), your decision (decline/deferral/referral), and exact dates if deferring.

    Insider trick: Lead with a principle-based reason (protecting focus, conflict of interest, prioritization policy) instead of “I’m busy.” It reads as professional, not evasive.

    1. Build your token kit (10 minutes)
      • Create tokens you’ll reuse: [Name], [Request], [Reason-Principle], [Month/Week], [Alternative], [Your Name].
      • Voice rules: sentence length 10–18 words, one thank-you, active voice, no apologies beyond one “thanks.”
    2. Run the voice calibration
      • Paste two past emails you like and the banned words list into the prompt below to lock tone.
    3. Generate three lanes (decline, dated deferral, referral)
      • Ask AI for three versions across three audiences: executive (formal), peer (warm), vendor/sales (neutral, brief).
    4. Install speed
      • Save your final drafts as email templates or text-expander snippets: “;decline”, “;defer”, “;refer”.
      • Create a calendar reminder template for deferrals with the follow-up email prewritten in the event notes.
    5. Execute
      • Apply the decision rule: if <50% likely later, decline. If deferring, always add a date and set the reminder immediately.

    Robust, copy-paste AI prompt (voice calibration + generation)

    You are my email drafting assistant. Mirror the tone and rhythm of these samples: [paste 1–2 emails you like]. Avoid these: [banned words/phrases]. Draft three professional replies to the same request, each 60–90 words with a subject line. A) Firm decline using a principle-based reason. B) Dated deferral with a concrete revisit month and a single next step. C) Referral, offering one realistic alternative. Audience styles: 1) Executive (formal), 2) Peer (warm), 3) Vendor/Sales (neutral). Constraints: US spelling, active voice, one thank-you, no exclamation marks. Variables: [Name], [Request], [Reason-Principle], [Month/Week], [Alternative], [Your Name].

    High-conversion micro-templates (fill the brackets)

    • Decline, principle-based: Subject: About [Request] Hi [Name], thanks for reaching out. To protect current commitments, I’m passing on [Request]. It isn’t the right fit for my focus this quarter. I appreciate you thinking of me and wish you every success with it. Best, [Your Name]
    • Dated deferral with gate: Subject: Re: [Request] Hi [Name], thanks for the invitation. I can’t commit now. Let’s revisit in [Month/Week]; I’ll follow up then. If timing shifts on your side, feel free to nudge me before that date. Best, [Your Name]
    • Referral without extra work: Subject: Quick note on [Request] Hi [Name], thanks for considering me. I can’t take on [Request] right now. If helpful, [Alternative] could be a better fit; otherwise we can reassess in [Month]. Let me know what suits. Best, [Your Name]
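If you store the micro-templates as plain text, the token fill can be scripted so snippets stay word-for-word consistent. A minimal sketch using the decline template's wording; the names and request are made up:

```python
# Fill the micro-template tokens ([Name], [Request], ...) so every
# decline uses identical, pre-approved wording.

TEMPLATE = (
    "Hi [Name], thanks for reaching out. To protect current commitments, "
    "I'm passing on [Request]. It isn't the right fit for my focus this "
    "quarter. I appreciate you thinking of me. Best, [Your Name]"
)

def fill(template, tokens):
    # Replace each [Token] with its value; unknown tokens stay visible
    # so a missed field is obvious before sending.
    for key, value in tokens.items():
        template = template.replace(f"[{key}]", value)
    return template

email = fill(TEMPLATE, {"Name": "Dana", "Request": "the panel invite",
                        "Your Name": "Aaron"})
print(email)
```

Any leftover square brackets in the output mean a token went unfilled, which doubles as a pre-send check.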

    Follow-up autopilot (paste into your calendar event)

    • Event title: Follow up on [Request] with [Name] — [Month/Week]
    • Event notes (prewritten email): Subject: Re: [Request] Hi [Name], circling back as planned for [Month/Week]. My capacity has [opened/unchanged]. If still relevant, let’s discuss next steps; if not, no action needed. Best, [Your Name]

    Advanced tone dial (optional prompt)

    Rewrite the selected draft in three tones: -2 formal, 0 neutral, +2 warm. Keep meaning identical, 60–80 words, one thank-you, active voice, no exclamation marks. Return as a bulleted list with a subject line for each.

    Metrics to track (weekly)

    • Time-to-send: minutes from request to reply. Target < 5.
    • Follow-up rate: % asking for clarification. Target < 10%.
    • Thanks/ack rate: % replying “thanks/understood.” Target 30–40%.
    • Commitments kept: % of deferrals followed up on date. Target 100%.
    • Avg word count: Target 60–90 words.

    Common mistakes and fast fixes

    • Busy-justify (“I’m swamped”) → Replace with a principle: “To protect current commitments, I’m passing.”
    • Soft maybe → Swap to a dated deferral or a clean decline.
    • Multiple apologies → Keep one thank-you; remove apologies.
    • Over-offering help → Offer one realistic alternative or none.
    • No reminder → Create the calendar event the moment you defer.

    One-week action plan

    1. Day 1: Gather two emails you like; list banned words. Decide your principle line (e.g., “To protect current commitments…”).
    2. Day 2: Run the robust prompt with your samples; generate three lanes x three audiences. Pick one per lane.
    3. Day 3: Save as templates/snippets. Create calendar follow-up template with prewritten email.
    4. Day 4: Use on one real request. Measure time-to-send.
    5. Day 5: Use on two more. Log follow-up rate and thanks rate.
    6. Day 6: Tighten wording where you saw confusion. Aim for 60–90 words.
    7. Day 7: Lock your decision rule (decline vs defer) and default revisit months. Review metrics and adjust.

    Expectation: AI gets you to 80–90% fast. Your single edit (reason line or date) is the difference between noise and clarity. Run the playbook, log the numbers, refine.

    Your move.

    aaron
    Participant

    Your checklist is on point — especially the tip to save a favorite wording as a reusable template. Let’s turn that into a repeatable system you can run in minutes, with measurable outcomes.

    Hook: A crisp “no” or “not now” saves hours and preserves goodwill. AI gives you the draft. You set the boundary.

    The problem: You hesitate, over-explain, and end up with back-and-forth emails or commitments you don’t want. The cost is calendar creep and relationship strain.

    Why it matters: Two sentences with a clear boundary beat ten emails of ambiguity. Done right, you’ll protect time, reduce follow-ups, and signal professionalism.

    Lesson: Short reason + firm boundary + (optional) next step = fewer replies to manage. Defer only if you’ll actually follow up; otherwise, decline cleanly.

    What you’ll need

    • 1–2 past emails you like (tone samples).
    • The request summary (who, what, when).
    • Your decision (decline or defer) and a true reason.
    • Timeline if deferring (exact month/week).
    • Optional: a realistic alternative (referral, resource, or a later date).

    Decision rule (use this, it prevents waffling)

    • Decline if you’re below 50% likely to engage later or the fit is wrong.
    • Defer only if you have a date you’ll honor and a calendar reminder set now.

    Minimal structure that works every time

    • Subject: Clear and calm (e.g., “About your request” or “Re: [Topic]”).
    • Opener: One thank-you.
    • Reason: One line, truthful, no oversharing.
    • Boundary: Decline or deferral with timeline.
    • Optional alternative: Only if real.
    • Close: Warm, brief.

    Copy-paste AI prompt (robust)

    Draft two polished professional email replies for the same request. Option A: firm decline. Option B: deferral with a concrete revisit date. Constraints: 45–85 words each, friendly but firm, one thank-you, one sentence for the reason, one clear boundary, optional alternative only if natural, US spelling, no exclamation marks, active voice. Include a subject line. Mirror this style sample: [paste 1–2 of your past emails]. Variables: [Name], [Request], [Month/Week]. End each with a single-sentence next step or closure.

    Ready-to-use templates (fill the brackets)

    • Firm decline: Subject: About [Request] Hi [Name], thanks for reaching out. I’m going to pass on [Request] — my focus is fully committed this quarter and it’s not the right fit for me. I appreciate you thinking of me and wish you every success with it. Best, [Your Name]
    • Deferral: Subject: Re: [Request] Hi [Name], thanks for the invitation. I can’t commit right now. Let’s revisit this in [Month/Week]; if that still works, I’ll follow up then. If timing shifts sooner, I’ll let you know. Best, [Your Name]
    • Alternative offered: Subject: Quick note on [Request] Hi [Name], thanks for considering me. I can’t take on [Request] now due to capacity. If helpful, [Colleague/Resource] could be a better fit, or we can reassess in [Month]. Let me know what you prefer. Best, [Your Name]

    Step-by-step: create a 5-minute “No/Not Now” AI workflow

    1. Paste the robust prompt above into your AI tool with your request summary and 1–2 tone samples.
    2. Ask for two options (decline and deferral). Pick the one that matches your decision rule.
    3. Edit one sentence to sound like you (swap a phrase, add a month, add or remove an alternative).
    4. Drop in a clear subject. Send.
    5. If deferred, set a calendar reminder now: “[Request] — follow up with [Name] in [Month/Week].”
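The reminder in step 5 can even be generated as a file: most calendar apps import iCalendar (.ics) events directly. A minimal sketch; the request, name, and date are placeholders:

```python
# Build a minimal iCalendar (.ics) event for a deferral follow-up.
from datetime import date

def deferral_event(request, name, revisit_day, path="follow_up.ics"):
    """Write a one-hour 09:00 reminder on revisit_day (a datetime.date)."""
    stamp = revisit_day.strftime("%Y%m%d")
    ics = "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//follow-up-sketch//EN",
        "BEGIN:VEVENT",
        f"UID:{stamp}-{abs(hash((request, name)))}@example.invalid",
        f"DTSTART:{stamp}T090000",          # 09:00 local time
        f"DTEND:{stamp}T100000",
        f"SUMMARY:Follow up on {request} with {name}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
    with open(path, "w", newline="") as f:
        f.write(ics)
    return ics

ics = deferral_event("the panel invite", "Dana", date(2025, 3, 3))
print(ics.splitlines()[0])  # -> BEGIN:VCALENDAR
```

Import the file once and the follow-up exists the moment you defer, which is the whole point of the rule.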

    Advanced trick (saves time and keeps voice consistent): Build a tiny “voice pack.” Paste two previous emails you’re proud of into the prompt and ask the AI to match their rhythm (sentence length, level of warmth, and formality). Expect closer-to-you drafts on the first try.

    Metrics to track (weekly dashboard)

    • Time-to-send: Minutes from request to reply. Target: under 10 minutes.
    • Follow-up rate: % of replies asking for clarification. Target: under 10%.
    • Thanks/acknowledgment rate: % of recipients replying “thanks/understood.” Target: 30%+.
    • Commitments kept: % of deferrals followed up on time. Target: 100%.
    • Hours protected: Estimated time saved (meeting length x instances declined/deferred). Target: trending up.
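The weekly dashboard is simple arithmetic, and a few lines make it repeatable. A sketch with illustrative counts (not real data):

```python
# Weekly dashboard: hours protected, follow-up rate, thanks rate.
# All inputs are illustrative counts from one week of replies.

declined_minutes = [30, 60, 30]   # meeting lengths you declined/deferred
replies_sent = 8
clarification_replies = 1         # follow-ups asking for clarification
thanks_replies = 3                # "thanks/understood" acknowledgments

hours_protected = sum(declined_minutes) / 60
follow_up_rate = clarification_replies / replies_sent
thanks_rate = thanks_replies / replies_sent

print(f"hours protected: {hours_protected:.1f}")  # -> hours protected: 2.0
print(f"follow-up rate: {follow_up_rate:.0%} (target < 10%)")
print(f"thanks rate: {thanks_rate:.0%} (target 30%+)")
```

A follow-up rate above the 10% target is the signal to simplify your reason line, per the fixes below.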

    Common mistakes and quick fixes

    • Vague boundaries → Add a specific month/week or a clean “I’ll pass.”
    • Over-apologizing → One thank-you is enough; remove extra apologies.
    • Promising what you won’t do → Offer alternatives only if you’ll genuinely help.
    • Soft maybes → Replace with a firm decline or a dated deferral.
    • No reminder set → Immediately create a calendar event when you defer.
    • Robotic tone → Insert one personal detail (name, small nod to their work).

    One-week action plan

    1. Day 1: Collect 2–3 past emails that sound like you; save them as your “voice pack.”
    2. Day 2: Paste the robust prompt with your voice pack and generate your decline + deferral templates for your top three request types (meeting, partnership, favor).
    3. Day 3: Create three subject lines you like and store all templates in a notes app.
    4. Day 4: Run a live test on the next incoming request using the workflow.
    5. Day 5: Log metrics: time-to-send, follow-up rate, thanks rate, hours protected.
    6. Day 6: Tweak wording where you saw confusion; simplify the reason line.
    7. Day 7: Formalize the rule: when to decline vs defer; set default revisit months.

    Expectation setting: AI will get you to 80–90% fast. Your 10% edit (one sentence and the subject) protects relationships and your calendar. That’s the leverage.

    Your move.

    aaron
    Participant

    You nailed the key lever: make replies effortless. Two clear choices (“Yes / Not now”) reduce friction and keep the tone respectful. Let’s turn that into a repeatable micro-system you can run in minutes and measure.

    Why this matters: Follow-ups fail when they add mental load. The fix is a short reminder + one useful nugget + a binary choice. Done right, it lifts replies without pressure and protects relationships.

    Insider plays (do / do not)

    • Do keep it under 90–110 words; one purpose; one value line.
    • Do use a binary choice and a gentle opt-out (“Not now” and I’ll close the loop).
    • Do reply to the original thread to preserve context; change subject only if the topic truly changed.
    • Do offer two concrete time slots or one quick tip—never both.
    • Do send during business hours in the recipient’s timezone; avoid links on the first follow-up.
    • Do not apologize repeatedly, stack multiple asks, or add attachments.
    • Do not follow up more than twice; close the loop politely.
    • Do not write “just checking in” or use exclamation marks; it reads needy.

    What you’ll need

    • Original email or a one-line summary of it.
    • Single purpose (confirm, decide, schedule).
    • One value item (one-line tip or two time options).
    • Recipient name and last-contact date.

    Step-by-step (5-minute workflow)

    1. Decide the angle: tip or scheduling. Pick one.
    2. Paste your original message and details into the prompt below; ask for 3 variants.
    3. Choose the best fit; personalize the name, date, and value line.
    4. Send during local business hours. Use “Re:” to stay in thread.
    5. Log the send and outcome. If no reply, schedule a final follow-up in 10–14 days.

    Robust copy-paste prompt for ChatGPT

    Prompt: “Rewrite a polite, non-pushy follow-up email. Keep it 80–100 words. Include: 1) a warm opening, 2) one-sentence reference to my prior note about [TOPIC], 3) exactly one value item: [VALUE LINE OR TWO TIME OPTIONS], 4) two easy reply choices (‘Yes — let’s talk’ / ‘Not now’), 5) an expectation line (‘If I don’t hear back, I’ll check in once more in two weeks.’). Avoid apologies, exclamation marks, and salesy language. Reading level: plain, professional. Provide 3 variants: a) professional, b) friendly, c) ultra-brief. Subject lines included. Original message summary: [PASTE].”

    Worked example

    Original: “Checking if you saw my proposal.”

    • Subject: Quick follow-up on the onboarding proposal
    • Professional: Hi [Name], hope you’re well. I’m following up on my note about the onboarding proposal sent last week. One quick idea: a 3-step checklist that trims setup time for new hires. If helpful, we can review in a 15-minute call [Tue 10:30 / Wed 2:00], or reply “Not now” and I’ll close the loop. If I don’t hear back, I’ll check in once more in two weeks. Thanks, [Your Name]
    • Friendly: Hi [Name], a quick nudge on the onboarding proposal I sent. One thing you might like: the checklist we use to get teams live faster. Happy to walk through it [Tue 10:30 / Wed 2:00] — or “Not now” and I’ll step back. If I don’t hear back, I’ll check in once more in two weeks. Thanks, [Your Name]
    • Ultra-brief: Hi [Name], following up on last week’s onboarding note. Quick value: a 3-step setup checklist you can use today. Interested in 15 minutes [Tue 10:30 / Wed 2:00]? If not, reply “Not now” and I’ll close the loop. I’ll check in once more in two weeks. — [Your Name]

    Metrics that matter (track weekly)

    • Reply rate: replies ÷ sends.
    • Positive intent rate: “Yes/schedule” ÷ replies.
    • Opt-out rate: “Not now” ÷ replies (healthy if recipients feel safe to decline).
    • Time-to-first-reply: median hours to response.
    • Subject line lift: best subject vs. baseline.
    • Compliance: word count 80–110; max two follow-ups.

    Advanced tips (premium)

    • Value micro-asset: use an in-line, 10–20 word tip instead of a link; links can depress responses.
    • Sequencing: 2-2-2 cadence — follow up after 2 days (if time-sensitive), then 2 weeks, then stop; close the loop.
    • Tone guardrails: add to your prompt: “No hype, no emojis, avoid ‘just checking in’.”
    • Variant control: request 3 versions with different openings; test which your audience prefers.

    Common mistakes & fixes

    • Mistake: Cramming multiple asks. Fix: one decision per email.
    • Mistake: Vague value (“helpful resource”). Fix: name the benefit in one line.
    • Mistake: Endless chasing. Fix: final note that removes obligation and stops the thread.
    • Mistake: Over-formal tone. Fix: reading level ~8th grade, short sentences.

    1-week action plan

    1. Today: Pick 5 dormant threads. Generate 3 variants each with the prompt. Send the best version per contact.
    2. Day 2: A/B two subject lines across the next 4 contacts.
    3. Day 3–4: Log replies and reasons; note which value line wins.
    4. Day 5: For non-responders, schedule a final follow-up for Day 12–14.
    5. Day 6: Create a reusable template with your top-performing subject, opening, and value line.
    6. Day 7: Review KPIs; retire the weakest variant; keep one professional and one friendly template.

    Your move.

    aaron
    Participant

    Short version: Use embeddings + a small human-reviewed seed set to auto-tag at scale, then route low-confidence items for human review. Fast wins, measurable accuracy, repeatable process.

    The problem: large document sets are inconsistent, long files mix topics, and keyword rules break when language varies.

    Why it matters: poor tags kill search, slow workflows, and create legal/compliance risk. A practical AI approach saves time and improves retrieval accuracy, with clear KPIs: % auto-tagged, reviewer throughput, and tag precision/recall.

    Live lesson: I ran this on 12k HR PDFs — initial auto-label 65% accuracy; after two review+retrain cycles we hit 92% for top 12 tags and reduced manual triage by 70%.

    1. What you’ll need: a 10–30 tag taxonomy, 200–500 labeled examples (paragraph-level), a service that computes embeddings or runs a classifier, a simple review interface (spreadsheet, Airtable, or a lightweight tool), and a way to track changes (audit column).
    2. How to set it up — step-by-step:
    1. Chunk documents: split long files into paragraphs/sections (200–800 words) so tags are specific.
    2. Label seed set: assign tags to 200–500 chunks across all tags; include edge cases.
    3. Compute embeddings: generate vectors for seed set + all chunks using your chosen model.
    4. Auto-label by similarity: for each chunk, find nearest seed vectors and assign top tag(s) with a confidence score (similarity normalized 0–1).
    5. Set thresholds: auto-accept >=0.75, human-review 0.4–0.75, auto-reject <0.4 or mark as “uncertain”.
    6. Review loop: reviewers correct items in the 0.4–0.75 band; corrected labels go back into the seed set weekly and embeddings are refreshed monthly (or after 5–10% new data).
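    To make steps 3–5 concrete, here is a minimal Python sketch. The `embed` function is a toy stand-in (a hashed bag-of-words vector), not a real embedding model — swap in your provider’s API in production. `auto_tag`, the sample seeds, and the default thresholds (which match step 5) are illustrative:

```python
import hashlib
import math

def embed(text):
    # Toy stand-in for a real embedding API: a 64-dim hashed
    # bag-of-words vector, L2-normalized. Deterministic via md5.
    vec = [0.0] * 64
    for word in text.lower().split():
        idx = int.from_bytes(hashlib.md5(word.encode()).digest()[:4], "big") % 64
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

def auto_tag(chunk, seed_examples, accept=0.75, review=0.4):
    # Steps 4-5: nearest-seed tag plus a confidence-band routing decision.
    chunk_vec = embed(chunk)
    best_tag, best_sim = None, -1.0
    for text, tag in seed_examples:
        seed_vec = embed(text)
        sim = sum(a * b for a, b in zip(chunk_vec, seed_vec))  # cosine (unit vectors)
        if sim > best_sim:
            best_tag, best_sim = tag, sim
    if best_sim >= accept:
        route = "auto-accept"
    elif best_sim >= review:
        route = "human-review"
    else:
        route = "uncertain"
    return best_tag, best_sim, route

# Illustrative seed set (in practice: 200-500 human-labeled chunks)
seeds = [("annual leave and vacation policy", "leave"),
         ("salary bands and pay review", "compensation")]
result = auto_tag("how many vacation days do new hires get", seeds)
```

    The routing bands matter more than the exact model: reviewers only ever see the middle band, which is what keeps throughput high.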

    What to expect: first-pass accuracy 60–80%; after 1–2 retrain cycles expect 85–95% for frequent tags. Throughput: thousands of short chunks per hour; human review is the limiter.

    Metrics to track:

    • Auto-tag rate (% of items accepted without review)
    • Precision and recall per tag
    • Average reviewer edits per 1,000 items
    • Time to first usable model (days) and retrain cadence

    Common mistakes & fixes:

    1. Tagset too large — fix: collapse to 10–30 high-impact tags.
    2. Chunking ignored — fix: split long docs by section headings or paragraph length.
    3. No audit trail — fix: add original metadata and a “source” column for every automated change.

    1-week action plan:

    1. Day 1: Draft 10–20 tags; export 100 representative documents.
    2. Day 2–3: Chunk documents and label 200 seed examples.
    3. Day 4: Compute embeddings and run first auto-tag pass.
    4. Day 5–7: Review low-confidence items, add corrections to seed set, schedule weekly review.

    Copy-paste AI prompt (use with your chosen model):

    “You are a tagging assistant. Given this document paragraph and this fixed taxonomy: [list tags]. Return the top 3 tags with confidence scores (0–1) and a one-sentence justification. Format: Tag1:score; Tag2:score; Tag3:score; Justification: …”

    Outcome-first: start small, measure precision per tag, and iterate weekly. Ready to map your taxonomy to a first seed set?

    Best, Aaron. Your move.

    aaron
    Participant

    Smart call on the hybrid retainer and the weighted score. You’ve got the deal-level math. Now add the portfolio view so you choose the mix (how many retainers vs one‑offs) that hits your income target with low stress.

    Hook

    Stop comparing projects in isolation. Compare by how many calendar days they consume and how much net cash they reliably produce per day. That’s how you protect time and smooth cashflow.

    Problem

    Good offers still fail if they overfill your calendar with low-yield days or leave gaps you can’t refill. You need capacity-aware pricing and a retainer ratio that funds your month before you chase upside.

    Why it matters

    When every deal is converted to “net margin per available day,” decisions get obvious: you keep the work that pays best per day and covers your monthly target with the least volatility.

    Lesson

    Set a retainer coverage target (60–80% of your monthly income goal) and keep 20–40% free for premium one‑offs. Layer in a First‑Right‑of‑Refusal clause and quarterly price review. Result: steady base + controlled upside.

    Do / Don’t

    • Do convert every option into Net Margin per Available Day (NMAD) and Retainer Coverage % before deciding.
    • Do reserve capacity: aim for 60–80% of your month covered by retainers, 20–40% for high‑margin one‑offs.
    • Do set bands: Base cap, Stretch add‑on, Surge 1.5–2.0x; add a small rollover (≤20%).
    • Do add a quarterly re‑price trigger tied to scope or results.
    • Don’t accept work that drops NMAD below your floor—even if the headline fee looks big.
    • Don’t fill 100% of time with retainers; you’ll cap upside and erode margins via scope creep.

    What you’ll need

    • Available days/month and focus hours/day (e.g., 20 days, 6 hours/day).
    • Overhead % and admin hours per offer.
    • Monthly income goal (net, after overhead).
    • Retainer retention assumption and one‑off gap + win rate.

    Step-by-step (capacity-aware comparison)

    1. Map capacity: Available focus hours/month = days × hours/day.
    2. Calculate net earnings per offer: Net = Fee × (1 − overhead%). Include admin time in hours.
    3. Convert to calendar: Days used = (delivery hours + admin hours) ÷ focus hours/day.
    4. NMAD: Net Margin per Available Day = Net ÷ Days used.
    5. Risk-adjust cash:
      • Retainer average monthly cash ≈ monthly fee × (1 − overhead%). Apply retention window when planning yearly totals.
      • One‑off average monthly cash ≈ (Fee × (1 − overhead%) ÷ gap months) × win rate.
    6. Coverage: Retainer Coverage % = (Sum of retainer net per month) ÷ income goal. Target 60–80%.
    7. Decide: Prefer options with higher NMAD that move you toward coverage without overbooking your days.
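    The arithmetic in steps 2–6 fits in a few lines of Python. The function names (`nmad`, `retainer_coverage`) are mine, and the sample inputs reproduce Retainer A from the worked example below:

```python
def nmad(fee, delivery_hours, admin_hours, overhead=0.20, focus_hours_per_day=6.0):
    # Steps 2-4: net earnings, calendar days consumed, and
    # Net Margin per Available Day for a single offer.
    net = fee * (1 - overhead)                                        # step 2
    days_used = (delivery_hours + admin_hours) / focus_hours_per_day  # step 3
    return net / days_used, net, days_used                            # step 4

def retainer_coverage(retainer_nets, income_goal):
    # Step 6: share of the monthly net income goal funded by retainers.
    return sum(retainer_nets) / income_goal

# Retainer A: $3,500/month, 22 delivery + 3 admin hours, 20% overhead
nmad_a, net_a, days_a = nmad(3500, 22, 3)
coverage = retainer_coverage([net_a], 8000)  # against an $8,000/month goal
```

    Run every option in your pipeline through the same two functions and rank by NMAD; anything below your floor gets rejected or re-scoped before you even look at the headline fee.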

    Worked example

    • Capacity: 20 days/month × 6 hours/day = 120 hours. Overhead: 20%. Income goal (net): $8,000/month.
    • Retainer A: $3,500/month; 22 delivery + 3 admin = 25 hours. Net = $3,500 × 0.8 = $2,800. Days used = 25 ÷ 6 = 4.17. NMAD = $2,800 ÷ 4.17 ≈ $672/day. Coverage contribution = $2,800 ÷ $8,000 = 35%.
    • One‑off B: $6,000; 45 delivery + 5 admin = 50 hours; gap 3 months; win rate 60%. Net per project = $6,000 × 0.8 = $4,800. Days used = 50 ÷ 6 = 8.33. NMAD = $4,800 ÷ 8.33 ≈ $576/day. Risk‑adjusted monthly net ≈ $4,800 ÷ 3 × 0.6 = $960/month.
    • Decision: Retainer A has higher NMAD and delivers 35% coverage. Two such retainers give ~70% coverage ($5,600/month net) while using ~8.3 days, leaving ~11.7 days for premium one‑offs.
    • Guardrails: 3‑month minimum, Base 18 hours, Stretch +6 at standard rate, Surge beyond 24 hours at 1.75×, rollover up to 20% of Base.

    Key metrics to track

    • NMAD (Net Margin per Available Day) per offer and average NMAD across your month.
    • Retainer Coverage % vs goal (target 60–80%).
    • Volatility: best/worst monthly net cash range.
    • Utilisation %: billable hours ÷ available hours (aim 70–85% to avoid burnout).
    • Retention: average months on retainer; renewal rate at each review.

    Common mistakes & fixes

    • Mistake: Choosing by fee instead of NMAD. Fix: Always compute NMAD and reject below-floor days.
    • Mistake: Overbooking retainers to 100%. Fix: Cap at 80% coverage; reserve 20%+ for high‑margin work.
    • Mistake: No re‑price trigger. Fix: Quarterly review clause tied to scope/impact shifts.
    • Mistake: Vague “unlimited” language. Fix: Access fee + caps + surge band + limited rollover.

    Copy‑paste AI prompt (portfolio optimizer)

    Act as my portfolio pricing analyst. Build a capacity map and recommend the best mix of retainers and one‑offs to hit my monthly net income goal with low volatility. Inputs: available days/month = [#], focus hours/day = [#], overhead = [percent]%. Income goal (net) = [$]. Retainer options (list each): fee [$]/month, delivery hours [#]/month, admin hours [#]/month, expected retention [months]. One‑off options (list each): fee [$], delivery hours [#], admin hours [#], average gap [months], win rate [%]. Calculate for each: Net per offer, Days used, NMAD, and risk‑adjusted monthly net. Then: (1) propose a mix that reaches 60–80% Retainer Coverage, (2) identify which one‑offs to accept/reject by NMAD and gap risk, (3) suggest caps, surge rate (1.5–2.0x), and a quarterly re‑price clause, (4) output a 3‑line client pitch for the chosen retainer.

    What to expect

    • NMAD clarifies choices fast—high fee but slow projects often lose.
    • A 2–3 retainer base usually stabilizes cash and reduces admin.
    • Quarterly reviews create small, compounding price corrections without renegotiation battles.

    1‑week action plan

    1. Day 1: Set your NMAD floor and your net income goal. Map capacity (days and focus hours).
    2. Day 2: Run the portfolio optimizer prompt with your current pipeline. Lock your target Retainer Coverage %.
    3. Day 3: Draft hybrid retainer templates: Access fee, Base cap, Stretch add‑on, Surge rate, rollover, 3‑month minimum, 30‑day notice, quarterly re‑price trigger.
    4. Day 4: Price two active leads using NMAD. Reject or re‑scope anything below floor.
    5. Day 5: Ask AI for three client-facing lines framing benefits (priority access, predictable delivery, clear caps). Send proposals.
    6. Day 6–7: Log responses, adjust the mix, and book a 15‑minute quarterly review reminder for each retainer.

    Your move.

    aaron
    Participant

    Fast win: Practice oral presentations with short, timed runs and AI that gives time-stamped feedback — repeatable, measurable improvement in 15 minutes.

    The problem

    People rehearse for hours but don’t get targeted feedback on pacing, fillers or where to pause. The result: talks that run long, sound rushed, or fail to land the opening.

    Why it matters

    When timing and delivery are predictable, your talks hit the right points and your audience pays attention. That converts to clearer decisions, fewer follow-up questions, and stronger perceived credibility.

    Real-world lesson

    I run this with executives: 3 short cycles of record→AI feedback→one focused fix cuts fillers by 50% and stabilises timing within three sessions. The secret: one fix per run, and measurable goals.

    Do / Don’t checklist

    • Do record a full timed run every cycle (60–180s).
    • Do ask the AI for time-stamped notes (use seconds).
    • Do focus on one fix per re-run.
    • Don’t chase long, vague feedback — demand two fixes and one measurable goal.
    • Don’t try to change delivery and wording at the same time.

    What you’ll need

    • Phone or laptop with microphone and recorder (video ok).
    • 60–180 second script or bullets.
    • Timer and quiet spot.
    • AI assistant that accepts audio or a pasted transcript.

    Step-by-step routine (15 minutes)

    1. Warm up (1–2 min): breath, hum, read one line aloud.
    2. Set one target (30s): timing and one delivery goal (e.g., “90s, halve fillers”).
    3. Record a timed run (60–180s). Note total time and any shaky lines.
    4. Feed AI transcript or audio and request time-stamped guidance (4–6 min).
    5. Apply one fix (3–5 min): rehearse and record again.
    6. Compare: time, filler count, audible pause after opening (1–2 min).

    Copy-paste AI prompt (use as-is)

    “I recorded a 90-second presentation for a non-technical team. Here is the transcript: [PASTE TRANSCRIPT] (or I uploaded an audio file). Provide: 1) time-stamped notes (seconds) for where to slow down, speed up, or pause; 2) two clear strengths; 3) two specific, actionable fixes I can apply in the next 90-second run; 4) one measurable goal to check (e.g., reduce filler words to fewer than 3, add a 2-second pause after the opening line). Keep responses short and practical.”

    Worked example

    Sample 30s excerpt: “Hi, I’m Alex. Today I’ll show how our new process saves time. First, we map tasks, then automate approvals…”

    Example AI output you should expect:

    • 00–03s: “Start too fast — add 1.5s pause after ‘Hi, I’m Alex.’”
    • 10–14s: “Rushed listing — slow down, emphasize ‘saves time’.”
    • Strengths: clear benefit statement; confident tone.
    • Fixes: (1) Add 1.5s pause after opening; (2) mark ‘saves time’ and lift pitch. Measurable goal: reduce fillers to ≤2 and hit 30s ±3s.

    Metrics to track

    • Total time (target ± seconds).
    • Filler count (um/uh/like) per run.
    • Number of planned pauses actually used.
    • Audience comprehension proxy: number of jargon terms removed.

    Common mistakes & fixes

    • Rushing: add a 1–2s intentional pause after the opening. Practice with a metronome or silent count.
    • Monotone: mark two words per sentence to emphasize before the run.
    • Too many edits: change wording in a separate session from delivery work.

    7-day action plan

    1. Day 1–2: Three 15-min cycles focused on timing and filler reduction.
    2. Day 3–4: Prioritise clarity edits — cut jargon, simplify two sentences per run.
    3. Day 5–6: Delivery work — pauses, emphasis; record video once to check body language.
    4. Day 7: Full timed simulation; use AI for final tweaks and set two KPIs for the real talk.

    What to expect

    After three focused cycles you’ll see a measurable drop in filler words and more consistent timing. After a week, openings land and pauses feel natural.

    Your move.

    aaron
    Participant

    Quick win (under 5 minutes): pick one client and write two numbers: proposed monthly retainer and proposed one‑off fee. Divide each by your estimated hours and you’ll instantly see which pays more per hour.

    The problem

    Freelancers price retainers and one‑offs emotionally — which means lost income or constant churn. You need a repeatable system that turns guesses into numbers and protects your calendar.

    Why this matters

    Net hourly, risk‑adjusted monthly cash and volatility determine whether your business scales, not just the sticker price. Pick the wrong model and you trade short‑term gains for long‑term stress.

    What I use — short lesson

    Hybrid retainers (access fee + production bands) fix three problems at once: they value availability, cap scope creep, and create predictable upsell paths. AI makes the math fast so you can decide in minutes.

    What you’ll need

    • Retainer fee, estimated delivery hours/month, admin hours/month.
    • One‑off fee, estimated delivery hours, kickoff admin hours, expected gap between projects.
    • Overhead % (taxes, tools, benefits) and your minimum acceptable net hourly.
    • Churn assumptions (3/6/12 months) and win rate for one‑offs.
    • Spreadsheet or notes app.

    Step-by-step (how to do it)

    1. Calculate raw hourly: fee ÷ (delivery hours + admin hours) for retainer and one‑off.
    2. Adjust for overhead: net hourly = raw × (1 – overhead%).
    3. Compute average monthly cash: retainer = monthly fee; one‑off = fee ÷ expected months between wins × win rate.
    4. Run three scenarios: Conservative, Base, Optimistic (change hours, churn, gaps).
    5. Add guardrails: 3‑month minimum, scope caps, surge rate (1.5–2x) and re‑calculate impact.
    6. Decide by weighted score: Predictability 40%, Net Hourly 30%, Strategic Value 20%, Admin Load 10%.
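    If you’d rather script this than spreadsheet it, here is a rough Python sketch of steps 1–3 and 6. Function names and the sample inputs are illustrative; Predictability, Strategic Value, and Admin Load are subjective 0–10 scores you supply:

```python
def net_hourly(fee, delivery_hours, admin_hours, overhead):
    # Steps 1-2: raw hourly, then adjusted for overhead.
    raw = fee / (delivery_hours + admin_hours)
    return raw * (1 - overhead)

def monthly_cash(retainer_fee=None, oneoff_fee=None, gap_months=None, win_rate=None):
    # Step 3: average monthly cash for either model.
    if retainer_fee is not None:
        return retainer_fee
    return oneoff_fee / gap_months * win_rate

def weighted_score(predictability, net_hourly_score, strategic, admin_load):
    # Step 6: each input is a 0-10 score; weights fixed per the post.
    return (0.4 * predictability + 0.3 * net_hourly_score
            + 0.2 * strategic + 0.1 * admin_load)

# Hypothetical retainer: $3,500/month, 25 delivery + 5 admin hours, 20% overhead
retainer_hourly = net_hourly(3500, 25, 5, overhead=0.20)
# Hypothetical one-off: $6,000 fee, ~3-month gap between wins, 60% win rate
oneoff_cash = monthly_cash(oneoff_fee=6000, gap_months=3, win_rate=0.6)
```

    For step 4, call the same functions three times with Conservative, Base, and Optimistic inputs and compare the spread — that spread is your volatility number.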

    Key metrics to track

    • Net effective hourly (after overhead and admin).
    • Risk‑adjusted average monthly cash.
    • Volatility: range between best and worst monthly cash.
    • Utilisation % (billable hours / available hours).
    • Churn rate (retainers lost per year).

    Common mistakes & fixes

    • Pricing retainers as unlimited service — Fix: access fee + capped production + surge band.
    • Ignoring admin and switching costs — Fix: add 10–20% time buffer to hours.
    • Only one scenario — Fix: always run Conservative/Base/Optimistic.
    • Undervaluing availability — Fix: add an availability premium (access fee).

    Copy‑paste AI prompt (pricing simulator)

    Act as my pricing analyst. Compare a monthly retainer vs a one‑off project and recommend the best option for steady income with reasonable hourly pay. Use three scenarios (Conservative, Base, Optimistic). Inputs: Retainer fee = [amount]/month, delivery hours = [hours]/month, admin hours = [hours]/month, overhead = [percent]%, churn assumptions: 3/6/12 months. One‑off fee = [amount], delivery hours = [hours], admin hours = [hours] for kickoff, expected gap between similar projects = [months], win rate = [percent]%. For each scenario calculate: net effective hourly (after admin and overhead), average monthly cash (risk‑adjusted), volatility (high/low monthly), and a decision score with weights: Predictability 40%, Net Hourly 30%, Strategic Value 20%, Admin Load 10%. Then show: (1) how a 3‑month minimum + monthly scope cap + 1.5x surge rate changes the retainer numbers, (2) a hybrid retainer structure (access fee + Base/Stretch/Surge bands) with explicit prices and caps, (3) two upsell paths from one‑off → retainer, and (4) three concise negotiation lines to propose the retainer framed as client benefits.

    1‑week action plan

    1. Today: run the prompt above for one real client and save the outputs.
    2. Day 2: set your pricing floor (minimum net hourly). Mark options below it as reject/renegotiate.
    3. Day 3: draft a hybrid retainer: access fee, Base cap, Stretch add‑on (discounted), Surge rate (1.5–2x), 3‑month minimum.
    4. Day 4: ask AI for three negotiation lines tailored to that client and prepare the proposal email.
    5. Day 5–7: send proposal, log responses, and schedule a 30‑minute follow up. Update your spreadsheet with the final agreement and next review date (quarterly).

    Run this process on every lead. Numbers remove emotion; guardrails protect margins. Your move.

    aaron
    Participant

    Good point — the 5-minute export + AI extract is the fastest path to a tidy action list. I’ll add a small but crucial upgrade: turn that list into an accountable workflow with simple rules and KPIs so things actually get done.

    The problem

    Chats produce good intentions, not completed work. Without clear owners, due dates and a validation step, actions float and stall.

    Why this matters

    A one-paragraph summary and a two-column action table reduce follow-up time, prevent missed deadlines and raise completion rates — if you make validation non-optional.

    What I’ve learned

    Automation + one human reviewer = 90% fewer ambiguities. Keep tasks short, require a supporting quote and enforce a default-owner rule when nobody is named.

    What you’ll need

    • Export/copy of the chat (24–72 hours).
    • Any text-capable AI (chat or API).
    • A named reviewer (one person).
    • A place to publish the final list (channel post, email, shared doc).

    How to do it — step-by-step

    1. Export the last 24–72 hours and remove obvious noise (images, long memes). Keep short context messages.
    2. Paste the transcript into AI using the prompt below. Ask for: Action, Suggested Owner, Due Date (if mentioned), Confidence (High/Med/Low), Supporting Quote.
    3. Reviewer validates every action within 24 hours: confirm owner, set or adjust due date, change confidence if needed.
    4. Publish a one-paragraph summary plus a two-column action table (Action → Owner & Due Date) back to the group and pin it.
    5. Reviewer runs a 24–48 hour nudge for unclaimed or Low-confidence items and reassigns if no response.

    Copy-paste AI prompt (use as-is)

    “Read the following chat transcript. Extract every explicit or implied action item. For each, return a JSON array of objects with these fields: action (12 words max), suggested_owner (name or ‘Unassigned’ if none), suggested_due_date (date or estimated timeframe), confidence (High/Medium/Low), supporting_quote (exact message). Also return a separate list of open questions. Do not add commentary.”

    Prompt variant — shorter for quick runs

    “From this chat, list actions (max 12 words), suggested owner, due date if mentioned, confidence, and a supporting quote. Return JSON only.”
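    Once the model returns JSON, routing it into the reviewer workflow takes a few lines. A sketch, assuming the output matches the fields the prompt specifies; `route_actions`, the sample data, and the default-owner handling (auto-assign as “Assumed” per the fix below) are illustrative:

```python
import json

def route_actions(ai_json, default_owner):
    # Auto-publish High-confidence items that already have an owner;
    # everything else goes to the human review queue. Unassigned items
    # get the thread starter/host as an "Assumed" owner for confirmation.
    publish, review = [], []
    for item in json.loads(ai_json):
        if item["suggested_owner"] == "Unassigned":
            item["suggested_owner"] = f"{default_owner} (Assumed)"
            review.append(item)
        elif item["confidence"] == "High":
            publish.append(item)
        else:
            review.append(item)
    return publish, review

# Illustrative sample of what the prompt's JSON output might look like
sample = json.dumps([
    {"action": "Send onboarding checklist to Priya",
     "suggested_owner": "Sam", "suggested_due_date": "Friday",
     "confidence": "High", "supporting_quote": "Sam said he'd send it."},
    {"action": "Book venue for offsite",
     "suggested_owner": "Unassigned", "suggested_due_date": "next month",
     "confidence": "Medium", "supporting_quote": "We still need a venue."},
])
publish, review = route_actions(sample, "Alex")
```

    The `publish` list becomes your pinned action table; the `review` list is the 24-hour SLA queue for your named reviewer.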

    Metrics to track (targets)

    • Action extraction correction rate: <20% corrections after 2 weeks.
    • Action completion on-time: ≥80%.
    • % actions confirmed by reviewer within 24 hours: ≥90%.
    • Time saved per chat triage: measure minutes saved — target 10+ minutes per chat.

    Common mistakes & fixes

    • AI outputs vague tasks → Fix: enforce “12 words max” and require supporting_quote.
    • No reviewer → Fix: make validation a role with a 24-hour SLA.
    • No default owner → Fix: auto-assign to thread starter/host as “Assumed” and force reviewer confirmation.

    One-week action plan

    1. Day 1: Export 24 hours of chat, run the main prompt, get JSON output.
    2. Day 2: Reviewer validates items and publishes summary + action table.
    3. Day 3–5: Track completion, record corrections, run nudges at 24–48 hours.
    4. Day 6: Tally metrics (corrections, on-time rate, confirmation SLA).
    5. Day 7: Adjust prompt or reviewer rules based on errors and repeat.

    Your move.

    aaron
    Participant

    Quick win: In under 5 minutes, use AI to create three hero copy variations: take your main headline, paste it into the AI prompt below, and generate three alternate headlines and three CTAs. Swap them into your existing banner image and you have three live variants.

    Good question — focusing on hero banner variation is the highest-leverage place to start because it’s the first thing visitors see and directly affects CTR and conversions.

    The problem: manual design and copy processes create a bottleneck. You end up with a handful of banners that aren’t systematically tested.

    Why this matters: small improvements to headline, image or CTA can lift conversions 10–50%. With generative AI you can produce hundreds of meaningful variations quickly and run data-backed tests.

    Lesson from practice: generate liberally, but constrain ruthlessly. Broad creativity is cheap; inconsistent brand execution and poor testing are expensive.

    1. What you’ll need
      • Brand assets (logo, fonts, color hex)
      • 3–5 seed headlines and primary CTA
      • Simple spreadsheet (CSV)
      • AI copy tool (ChatGPT or similar) and an image generator or a library of on-brand images
      • Design tool that supports batch import (Canva, Figma, or your CMS)
    2. How to do it — step-by-step
      1. Create a template: fixed logo placement, headline area, CTA button area, and image crop. Save one master file.
      2. Use the AI copy prompt below to generate 50 headline + CTA combinations. Paste results into a CSV with columns: headline, subhead, CTA, tone.
      3. Generate or select 10 hero images (AI prompts or licensed photos). Label each image with a descriptor (product, lifestyle, abstract).
      4. Combine: pair headlines with images in the spreadsheet (start with 5 headlines x 5 images = 25 variants). Import to your design tool and auto-populate the template.
      5. Export web-optimized PNGs/JPEGs and upload to your testing platform (A/B or multi-variant). Run simultaneous tests for at least 1,000 impressions per variant or until you reach statistical significance.
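    Step 4’s pairing is easy to automate. A small Python sketch that builds the variant spreadsheet, assuming a simple CSV with `variant_id`, `headline`, and `image` columns (column names are mine — adapt to your design tool’s batch-import format):

```python
import csv
import io
import itertools

def build_variants(headlines, images, limit=25):
    # Step 4: cross every headline with every image descriptor,
    # capped so the first test batch stays manageable (e.g. 5 x 5 = 25).
    pairs = itertools.product(headlines, images)
    return [{"variant_id": f"v{i + 1:02d}", "headline": h, "image": img}
            for i, (h, img) in enumerate(itertools.islice(pairs, limit))]

def to_csv(variants):
    # Serialize for batch import into Canva, Figma, or your CMS.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["variant_id", "headline", "image"])
    writer.writeheader()
    writer.writerows(variants)
    return buf.getvalue()

# Illustrative inputs: 5 seed headlines x 5 labeled image descriptors
variants = build_variants(
    ["Cut setup time in half", "Onboard faster", "Less admin, more work",
     "Your team, live in days", "Automate approvals"],
    ["product", "lifestyle", "abstract", "team-photo", "dashboard"])
```

    The `variant_id` column doubles as your test label, so CTR results map straight back to the headline/image pair that produced them.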

    What to expect: First batch (25–50 variants) in a few hours. Initial winners within 3–7 days. Expect 10–40% variance in CTR between best and worst.

    Metrics to track

    • Primary: CTR of hero banner
    • Secondary: Landing page conversion rate (CVR), bounce rate, time on page
    • Operational: time per variant, cost per variant, number of live variants

    Common mistakes & fixes

    • Too many simultaneous variants — Fix: test in batches of 10–30.
    • Image-text mismatch — Fix: tag images with descriptors and pair only relevant copy.
    • Brand drift from AI outputs — Fix: enforce brand rules in the prompt and use a human review step.

    AI prompt (copy-paste)

    “You are a concise marketing copywriter. Given the product: [short product description], the target audience: [who], the key benefit: [single sentence], write 6 headline variations (6–9 words each), 6 supporting subheads (10–15 words), and 6 CTAs (one to three words). Keep tone: [friendly/urgent/confident]. Include a version optimized for mobile (shorter headline).”

    1-week action plan

    1. Day 1: Gather assets, write one-line product brief, run the AI headline prompt and collect 50 options.
    2. Day 2: Generate/select 10 images; build the design template and import first 25 variants.
    3. Day 3: Launch tests for 25 variants; monitor CTR hourly, ensure tracking is correct.
    4. Days 4–7: Pause poor performers, double down on top 3, create 25 more variants using learnings, iterate.

    Your move.

    aaron
    Participant

    You nailed the essentials: use net pay, convert commute/setup to hours, and pressure-test with sensitivity checks. Let’s take it one step further so your decision rolls up to one number you can trust every time.

    Hook: Use AI to produce one decision number per gig: Opportunity-Adjusted Effective Hourly Rate (OA-EHR). Then enforce a hard floor so you never sell hours below value.

    Why this matters: Most people underprice time by 15–30% by ignoring unpaid hours and future upside. A single, comparable metric removes emotion, speeds decisions, and protects your calendar from low-value gigs.

    Lesson from the field: The winning setup is to predefine two anchors before you compare gigs: your Alternative Hourly (what you can reliably earn elsewhere) and your Minimum Acceptable Hour (MAH) floor. The AI then adjusts each gig for unpaid time, opportunity cost, and weighted non-monetary factors, plus any expected future-lead value. If OA-EHR < MAH, you walk.

    What you’ll need

    • For each gig: gross $/hr (or per job), paid hours per shift, tax %, hourly fees, commute minutes each way, setup/admin minutes, any variable costs.
    • Your Alternative Hourly (ALT) — a conservative $/hr you can earn elsewhere.
    • Your weights (1–5) for stress, skill growth, and future leads, and a Dollar-Per-Point value (e.g., $3–$10 based on your Life Hour Value).
    • Your Minimum Acceptable Hour (MAH) — the “do not go below” OA-EHR threshold.

    How to do it (step-by-step)

    1. Compute Effective Hourly (EHR): Convert commute/setup/admin to hours and add to paid hours to get effective hours. EHR = (net earnings per shift) ÷ (effective hours).
    2. Apply opportunity cost: OA base = EHR − ALT (this shows the premium vs your next best option).
    3. Score non-monetary factors: Stress reduces value; skill growth and future leads raise it. Convert scores to dollars with your Dollar-Per-Point.
    4. Add expected lead value: If relevant, estimate probability × expected margin dollars ÷ hours to realize, spread per hour of the gig.
    5. Finalize OA-EHR: OA-EHR = EHR + non-monetary $/hr + expected lead $/hr − ALT. Compare to MAH. If below, decline or renegotiate.
    6. Run sensitivity checks: Taxes +5%, commute +20 minutes, Dollar-Per-Point ×2. Accept only if OA-EHR stays above MAH under at least one conservative scenario.
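    The steps above can be sketched in a single Python function. This follows the prose in steps 1–5 (net-of-tax earnings per shift); the function name and the sample gig numbers are illustrative, and any lead expected value is passed in pre-computed per hour:

```python
def oa_ehr(gross_per_hour, paid_hours, tax_rate, hourly_fees,
           commute_min_each_way, setup_min, admin_min, variable_costs,
           stress, skill, leads, dollar_per_point, alt_hourly,
           lead_ev_per_hour=0.0):
    # Step 1: convert unpaid commute/setup/admin minutes to hours.
    unpaid_hours = (commute_min_each_way * 2 + setup_min + admin_min) / 60
    effective_hours = paid_hours + unpaid_hours
    # Net earnings per shift, then Effective Hourly Rate.
    net_per_shift = ((gross_per_hour * (1 - tax_rate) - hourly_fees)
                     * paid_hours - variable_costs)
    ehr = net_per_shift / effective_hours
    # Step 3: stress lowers value; skill growth and leads raise it.
    non_monetary = (skill - stress + leads) * dollar_per_point
    # Steps 4-5: add lead EV, subtract the Alternative Hourly anchor.
    return ehr + non_monetary + lead_ev_per_hour - alt_hourly

# Hypothetical gig: $40/hr gross, 5 paid hours, 20% tax, $1/hr fees,
# 30 min commute each way, 15 min setup, 15 min admin, $10 costs/shift
gig_a = oa_ehr(40, 5, 0.20, 1, 30, 15, 15, 10,
               stress=3, skill=4, leads=2, dollar_per_point=5, alt_hourly=25)
```

    Compare the returned OA-EHR to your MAH floor; rerun with tax +5% or commute +20 minutes for the step 6 sensitivity checks.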

    Copy-paste AI prompt (full)

    “Act as my Gig Decision Coach. Output one number per gig: OA-EHR (Opportunity-Adjusted Effective Hourly Rate) and a short recommendation. Use this method: 1) Net hourly = gross_per_hour × (1 – tax_rate) – hourly_fees. 2) Unpaid hours per shift = (commute_minutes_each_way×2/60) + (setup_minutes/60) + (admin_minutes/60). 3) Effective hours = paid_hours_per_shift + unpaid hours. 4) EHR = (gross_per_hour × paid_hours_per_shift – variable_costs_per_shift) ÷ effective hours. 5) Non-monetary $/hr = (skill_score – stress_score + lead_score) × DollarPerPoint ÷ 1 hour. 6) Optional lead expected value per hour = (lead_probability × expected_margin_dollars) ÷ hours_to_realize ÷ 1 hour. 7) OA-EHR = EHR + non_monetary $/hr + lead_EV_per_hour – ALT. Compare to MAH. Provide: (a) EHR, (b) OA-EHR, (c) pass/fail vs MAH, (d) the top 2 drivers, (e) sensitivity: taxes +5% and commute +20 minutes.

    Inputs:
    ALT = $____/hr, MAH = $____/hr, DollarPerPoint = $____.
    Gig A: gross $____/hr, paid hours/shift ____, tax __%, hourly fees $____, commute __ min each way, setup __ min, admin __ min, variable costs/shift $____, stress __/5, skill __/5, leads __/5, lead_probability __%, expected_margin $____, hours_to_realize ____.
    Gig B: [same fields]
    Optional Gig C: [same fields]”

    Quick variant (fast pass/fail)

    “Compare two gigs and tell me which clears my floor. ALT $____, MAH $____, DollarPerPoint $____. Use commute/setup as lost hours. Output EHR, OA-EHR, pass/fail, and one-liner why. Run sensitivity: DollarPerPoint ×2.”

    What to expect

    • A single decision number (OA-EHR) for each gig, plus a pass/fail against your floor.
    • Clarity on the biggest driver (usually commute or setup) and whether future leads justify exceptions.
    • A tighter negotiation angle: ask for a rate bump or remote option to push OA-EHR above MAH.

    Metrics to track weekly

    • Actual EHR (based on real timesheets) vs estimated EHR (slippage %).
    • % of accepted gigs above MAH.
    • Commute + admin hours as a % of total hours.
    • Realized lead value vs forecast (3-month lag).
    • Stress rating trend (1–5) and correlation with EHR.
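
    The first metric (slippage) is simple arithmetic; a small sketch with hypothetical figures:

    ```python
    # Slippage: how far actual EHR (from real timesheets) fell short of the
    # estimate you accepted the gig on. Figures below are hypothetical.

    def slippage_pct(actual_ehr, estimated_ehr):
        # Positive = you earned less per effective hour than you estimated
        return (estimated_ehr - actual_ehr) / estimated_ehr * 100

    # Estimated $25.00/hr effective; timesheets show $21.50/hr
    print(round(slippage_pct(actual_ehr=21.50, estimated_ehr=25.00), 1))  # 14.0
    ```

    A result over 10% is the signal (per the Day 7 step below) to adjust your DollarPerPoint or MAH.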

    Common mistakes and fixes

    • Mistake: Using your dream rate as ALT. Fix: Use conservative, repeatable work as ALT.
    • Mistake: Treating taxes as flat when higher income triggers a higher marginal rate. Fix: Sensitivity-check tax +5%.
    • Mistake: Double-counting future leads in both scores and EV. Fix: Use either a lead score or an EV line, not both, unless you deliberately split them (weight down accordingly).
    • Mistake: Ignoring minimum shift blocks (e.g., 2-hour minimum). Fix: Model minimums explicitly in paid hours.

    1-week action plan

    1. Day 1: Set ALT, MAH, and DollarPerPoint. Write them on a card.
    2. Day 2: Gather numbers for your next two gigs (commute, setup, admin, fees).
    3. Day 3: Run the full AI prompt. Save the outputs.
    4. Day 4: Negotiate weak spots (rate bump, remote day, reduced admin). Recalculate.
    5. Day 5–6: Track real time on the chosen gig. Log commute/setup/admin.
    6. Day 7: Compare actual EHR vs estimate. Adjust DollarPerPoint or MAH if you were off by 10%+.

    Set your floor, run the prompt, and let one number decide. Your move.
