What We Did Today
[This is an ongoing blog series; start your journey here.] Today’s session was about translating ideas into action—and stress-testing our core hypothesis: Can vibe design and vibe coding tools accelerate the journey from concept to code without compromising quality, clarity, or collaboration?
To find out, we zoomed in on product definition. Using Miro as our collaborative workspace, we conducted a live prioritization workshop to sort every shopper- and farmer-facing feature into “must have,” “should have,” and “could have” tiers. This forced a shared understanding of scope, value, and feasibility—and helped us pinpoint exactly what we expect these AI tools to deliver in the coming prototype phase.
The session layered on top of AI-generated mocks from earlier in the week. This gave us a rare chance to see how well early AI-generated UX aligns with actual product strategy—and whether those tools can carry their weight beyond the idea phase.
We also pressure-tested how data from tools like Miro can translate into structured assets (e.g., PRDs, dev tickets, screen logic) that tools like Figma Make or Stitch can interpret or build from. By the end of the session, every feature was tagged, grouped, and prepped for AI-assisted PRD generation and prototype creation—setting us up to see just how fast and accurately we can go from feature list to shippable code.
Today, we focused on:
- Shopper experience priorities like search, navigation, feeds, and notifications
- Farmer experience flows from onboarding to inventory management and visibility
- Tagging and categorizing every feature for export and integration into Notion (see the export sketch after this list)
- Deciding how and where AI fits into the workflow—from feature extraction to prototype generation to PRD formatting
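To make that tagging-and-export step concrete, here is a minimal sketch of the kind of glue script we have in mind. Everything in it is an assumption for illustration: Miro’s real CSV export won’t necessarily use columns named Feature, Tier, and Audience, and the file names are placeholders.

```python
import csv
from collections import defaultdict

# Assumed tier labels from the Miro board -- adjust to the real export.
TIERS = {"must have": "Must", "should have": "Should", "could have": "Could"}

def miro_to_notion(src: str = "miro_export.csv", dst: str = "notion_import.csv") -> None:
    """Normalize a (hypothetical) Miro CSV export into a Notion-importable table."""
    rows_by_audience = defaultdict(list)
    with open(src, newline="") as f:
        for row in csv.DictReader(f):
            tier = TIERS.get(row["Tier"].strip().lower(), "Untriaged")
            rows_by_audience[row["Audience"]].append(
                {"Feature": row["Feature"].strip(), "Tier": tier}
            )

    # Notion imports a flat CSV and maps each column to a database property.
    with open(dst, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Feature", "Tier", "Audience"])
        writer.writeheader()
        for audience, rows in rows_by_audience.items():
            for r in rows:
                writer.writerow({**r, "Audience": audience})

if __name__ == "__main__":
    miro_to_notion()
```

Notion can import a flat CSV directly into a database, so keeping one table with Tier and Audience columns lets us filter must-haves per user type without extra tooling.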

Roles, Tools & Working Styles
Developers
Toolset: Claude Code, Stitch, Notion
Approach: Developers are leaning toward tools that support clean, maintainable code exports—especially where AI tooling doesn’t add overhead. Code-based outputs are preferred over visual gimmicks.
Designers
Toolset: Figma Make, Miro, Figma AI
Approach: Visuals first. Designers are generating mockups directly from prioritized features and experimenting with AI-powered handoff tools. Emphasis on fast iteration over pixel perfection.
Explorers / Generalists
Toolset: Miro, Notion, ChatGPT
Approach: Focused on sorting ideas into tagged systems, testing AI tool performance (Miro → PRD → Figma), and planning side-by-side comparisons between manual vs. AI-generated product requirements (see the diff sketch below).
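Once the two PRD drafts exist, the comparison itself needs no special tooling. A minimal sketch using Python’s standard difflib, assuming the drafts are saved as plain-text files under these made-up names:

```python
import difflib
from pathlib import Path

# Hypothetical file names for the two drafts being compared.
manual = Path("prd_manual.md").read_text().splitlines()
generated = Path("prd_chatgpt.md").read_text().splitlines()

# A unified diff shows exactly where the AI draft diverges from the manual one.
for line in difflib.unified_diff(
    manual, generated, fromfile="manual PRD", tofile="ChatGPT PRD", lineterm=""
):
    print(line)
```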
Key Insights & “Aha” Moments
What surprised us, shifted our thinking, or needs to be flagged.
- Figma Make is surprisingly strong: AI-generated onboarding flows and personas grouped by screens gave a real sense of polish—and speed.
- Manual > Miro AI (for now): Manually tagging features into must/should/could-have tiers in Miro outperformed Miro’s AI in both fidelity and clarity.
- Trust Google Reviews: Instead of building a moderation system for comments, the team leaned into using Google Reviews as a credible, external trust signal.
- Tool convergence is real: Everyone noticed how similarly these new AI tools behave—many generate screens, few close the loop on code or collaboration.
Selected AI Tools and Approaches to Explore
- Figma Make for structured screen generation from feature sets
- Stitch by Google for early mockups and UX ideation
- Miro for collaborative prioritization, then CSV export into Notion
- Claude Code for testing code output from screen-level designs
- ChatGPT for PRD generation directly from screenshots or lists (see the sketch below)
Goal: To test the end-to-end workflow—can we go from feature prioritization → AI-generated PRD → screen mocks → usable code, without burning too much time or trust?
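As one concrete rung of that ladder, the feature-list → PRD step could look like the sketch below. This is a hedged illustration, not our pipeline: the prompt, model name, and helper function are all placeholders, and it assumes the OpenAI Python client with an OPENAI_API_KEY in the environment.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_prd(features: list[str]) -> str:
    """Turn a prioritized feature list into a first-pass PRD (illustrative only)."""
    prompt = (
        "Write a concise PRD for these must-have features. Include a one-line "
        "problem statement, user stories, and acceptance criteria per feature:\n"
        + "\n".join(f"- {f}" for f in features)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; whichever model the team settles on
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_prd(["Shopper search", "Farmer onboarding", "Inventory visibility"]))
```

The same Miro-exported feature list feeds both this script and the manual draft, which is what keeps the dual-PRD comparison fair.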
Quotes from the Team
What was said—and why it mattered
“I would have beat AI in a contest.” — Priscilla
Context: After manually extracting and tagging all features from previous mocks, confirming that AI’s not quite ready to auto-prioritize cleanly.
“We used to spend so much time making these screens.” — Priscilla
Context: While viewing the auto-generated Figma Make output, reflecting on how fast onboarding flows were mocked up compared to traditional workflows.
“Let’s give it a shot… and the answer is, it didn’t—or it did. Either one is the right answer.” — JP
Context: Framing the sprint mindset around experimentation, not perfection. If the tool fails, that’s still progress.
Hypotheses We’re Testing Today
- Product Requirements Documents (PRDs) generated from feature sets will match (or exceed) manual documentation
- Figma Make can output developer-ready mockups with minimal rework
- Tools that start with code (like Stitch) will support full design iteration
Risks & Unknowns
- Google Stitch and Figma Make may not support true design handoff workflows
- PRD tooling inconsistency—multiple formats could lead to fragmentation
Key Decisions Made
- Agreed to generate dual PRDs—one via ChatGPT, one via Miro export—for comparison
- Will use Figma Make for next screen builds and dev handoff experiments
- “Must-have” features now fully tagged and separated by user type (shopper vs. farmer)
Takeaways for Builders
- Choose AI tools that generate code or enable collaboration, not just pretty pictures.
- Parallel workflows—manual + AI—help you triangulate what’s working.
- PRDs still matter. And the format you choose determines what AI can do next.
What’s Next
Coming up:
- Prototype build phase begins — focused on mock generation using Figma Make
- Maggie and Priscilla will synthesize must-have flows into screens
- Cary will test code handoff and integration into Notion or Jira
- Two PRDs will be compared—Miro export vs ChatGPT-generated—to evaluate clarity and compatibility
- Ongoing journal entries, screen recordings, and side-by-side tool tests to be documented for sprint transparency
Final reminder: This isn’t about perfection—it’s about proof. We’re building fast, learning publicly, and asking: What can AI actually deliver when it’s part of the team—not just a tool?