Design Sprint Day Three Recap – Prioritize, Debate, and Drift from the Vibe

What We Did Today

[This is an ongoing blog series; start your journey here.] In this session, the team continued defining core MVP functionality, but more importantly, we observed how much of the workflow leaned on, or drifted away from, Vibe Design and Vibe Code methods.

Rather than defaulting to LLM-driven prompts for interface structure or UX logic, most decisions were hashed out via real-time conversation and whiteboard thinking. There was minimal active use of conversational prompting to define new flows, screen layouts, or architectural scaffolds.

The vibe was there. The Vibe Design wasn’t.

Not because it doesn’t work, but because the team didn’t reach for it during this stage. That’s a critical insight: unless sessions are deliberately structured around conversational workflows, cross-functional teams may default back to traditional tools, even during an AI-first sprint.

Today, we focused on:

  • Defining MVP scope using team discussion and past Figma references
  • Weighing architectural trade-offs like embedded maps vs. external links
  • Surfacing onboarding and personalization flows manually—without live LLM prompting

Roles, tools & working styles

Developers

Working Style: Traditional architecture-first discussion

While Vibe Code was not directly used, devs weighed backend constraints and costs (e.g., Google Maps SDKs). Code generation via LLM wasn’t tested in-session—most technical evaluation was instinctual and experiential.

Designers

Working Style: Hybrid—leaning traditional

Designers referenced prior outputs and sketches, some of which originated from earlier Vibe Design experiments (e.g., onboarding flows from Claude). However, no new flows were generated live via LLM during the session.

Explorers / Generalists

Working Style: Conceptual modelling via discussion

This group contributed ideas for onboarding, personalization, and feed customization. Tools like Claude had been used before to bootstrap categories and layout ideas, but the group reverted to organic collaboration instead of prompting in the moment.

Key insights & “aha” moments

  • Vibe Design requires intentional scaffolding. Without cues or rituals prompting its use, teams drift back to default behaviors (talking, sketching, pointing).
  • Conversational workflows are underutilized in prioritization phases. They’re powerful for early ideation—but may be seen as redundant once the team starts “figuring things out together.”
  • AI needs a seat at the table, not just the whiteboard. Most decisions happened in Zoom, not in prompts. Without co-piloting the conversation itself, Vibe methods stayed passive.

Selected AI Tools and Approaches to Explore

  • Claude (earlier use for category modelling and onboarding flow generation)
  • Figma Make (used for prior screens, not actively during this session)
  • Vibe Design (absent during this workshop, though earlier outputs were referenced)
  • Vibe Code (not used—architecture discussion was verbal and practical)

Goal: To assess whether conversational workflows can stay active across the full product lifecycle—or if they only shine during initial ideation.

Hypotheses we’re testing today

What we’re really learning about conversational workflows

  • Vibe Design is powerful—but fragile without structured use
  • Teams revert to traditional collaboration unless AI prompting is embedded in rituals
  • Vibe Code workflows require live coding tasks or flow generation moments to activate
  • LLMs are great co-pilots—but they’re not in the cockpit unless invited

Risks & unknowns

If Vibe Design fails, here’s why it might

  • Without daily scaffolding, the LLM stays silent—leaving humans to fall back on old habits
  • Prior outputs from Claude or Stitch risk becoming artifacts instead of assets
  • Developers still prefer to discuss feasibility rather than prompt it into code
  • Lack of real-time prompting limits learning cycles and undermines tool validation

Key decisions made

Mostly manual. Here’s what that tells us.

  • No shopper login required (based on conversation, not prototype testing)
  • Linking to external map apps instead of embedding (based on cost, not generated comparisons; see the sketch after this list)
  • Shopper profiles will collect minimal info (decided manually, could have been simulated via LLM)
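
Since the map decision came down to SDK cost, here is a minimal sketch of what “link out instead of embed” could look like. This is an illustration under our own assumptions, not code from the sprint: the store data is a placeholder, and the links follow the publicly documented Google Maps and Apple Maps URL schemes.

```typescript
// Minimal sketch of the "link out, don't embed" approach for store locations.
// Placeholder data; no map SDK is loaded, the app just renders plain links.

interface StoreLocation {
  name: string;
  lat: number;
  lng: number;
}

// Deep link that opens the location in Google Maps (web or native app).
function googleMapsLink(store: StoreLocation): string {
  return `https://www.google.com/maps/search/?api=1&query=${store.lat},${store.lng}`;
}

// Equivalent link for Apple Maps on iOS/macOS.
function appleMapsLink(store: StoreLocation): string {
  return `https://maps.apple.com/?ll=${store.lat},${store.lng}&q=${encodeURIComponent(store.name)}`;
}

// Usage: render these as <a href> targets instead of embedding a paid map SDK.
const store: StoreLocation = { name: "Example Market", lat: 40.7128, lng: -74.006 };
console.log(googleMapsLink(store), appleMapsLink(store));
```

The trade-off is plain: we give up in-app map interactions, but we also avoid SDK licensing, API keys, and usage billing for an MVP.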

Takeaways for builders

Especially those experimenting with LLM-based workflows

  • Vibe Design needs a prompt protocol. Without intentional cues, it fades into the background
  • AI is a tool. Conversation is a method. Rituals are the glue. You need all three
  • If AI doesn’t shape decisions, you’re not testing it—you’re just remembering it
  • Designers and devs need better LLM handoffs. If early flows aren’t actively referenced or evolved, they don’t compound in value

What’s next

Shifting from talking about AI to actually using it—again

Coming up:

  • Prompting Claude and Stitch live to generate real onboarding flows
  • Testing if Vibe Code can scaffold map-based UIs using open-source alternatives (a sketch of one candidate follows this list)
  • Embedding daily “Vibe Check” prompts into sessions to track usage and resistance
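
To make “open-source alternatives” concrete before we test it: one candidate is Leaflet with OpenStreetMap tiles. The sketch below is our own rough assumption of what that scaffold might look like, not Vibe Code output; the element ID and coordinates are placeholders.

```typescript
// Hypothetical scaffold for a map-based UI using Leaflet + OpenStreetMap tiles,
// an open-source alternative to the Google Maps SDK. Assumes an HTML element
// <div id="store-map"></div> and the `leaflet` (+ `@types/leaflet`) package.
import * as L from "leaflet";

// Centre the map on a placeholder location at a city-level zoom.
const map = L.map("store-map").setView([40.7128, -74.006], 13);

// Free OpenStreetMap tile layer; attribution is required by OSM's tile policy.
L.tileLayer("https://tile.openstreetmap.org/{z}/{x}/{y}.png", {
  attribution: "&copy; OpenStreetMap contributors",
}).addTo(map);

// One marker per store, with a simple popup for the name.
L.marker([40.7128, -74.006]).addTo(map).bindPopup("Example Market");
```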

Documentation plans:

Each participant will now reflect specifically on how/when they use conversational design/code workflows—and why they may not. This will be logged alongside outputs for better traceability.
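
As a sketch of what that log could look like (the field names below are our assumption, not a decided schema), each entry would capture which tool was used, at what phase, and whether its output actually shaped a decision:

```typescript
// Illustrative shape for the per-participant workflow log; field names are
// ours and may change once the team agrees on a format.
type Phase = "ideation" | "prioritization" | "design" | "build";

interface WorkflowLogEntry {
  participant: string;            // who is reflecting
  sessionDay: number;             // sprint day, e.g. 3 for this recap
  tool: "Claude" | "Stitch" | "Figma Make" | "none";
  phase: Phase;                   // where in the lifecycle it was (or wasn't) used
  promptedLive: boolean;          // was the tool prompted during the session?
  shapedDecision: boolean;        // did its output influence a real decision?
  note?: string;                  // why it was, or wasn't, reached for
}

// Example entry matching today's session: prior outputs referenced, no live prompting.
const entry: WorkflowLogEntry = {
  participant: "designer-1",
  sessionDay: 3,
  tool: "Claude",
  phase: "prioritization",
  promptedLive: false,
  shapedDecision: false,
  note: "Referenced earlier onboarding flows; no new prompts in the session.",
};
```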

Final reminder: This isn’t about perfection—it’s about proof. We’re building fast, learning publicly, and asking: What can AI actually deliver when it’s part of the team—not just a tool?
