24 Comments

Everyone is Using AI for Vibe Coding, but What You Really Need is Vibe UX

As someone with a background in both Software Development (but with poor coding skills) and UX Research (but with poor UI skills), AI has been a game-changer.

Much like everyone and their mother, I can launch a front-end application in a matter of minutes and delegate all the "hard" work to the latest AI model, never touching a line of code or designing a single interface, page, or button.

But when it comes to creating your own product, is this really the "hard" work?

As a UX professional who has spent years exclusively in the research phase, a good 90% of my job consisted of (politely) telling other people that their ideas were sh*t, and the other 10% of trying to prove why their ideas were sh*t.

You can create the most beautiful, efficient, well-coded app in the world, but if the initial idea is sh*t, or if you are trying to solve the wrong problem, or if you are building with the wrong audience in mind - nobody is going to use it, and all your "hard" work will be for nothing.

There are dozens, maybe hundreds, of "vibe coding" tools targeted at wannabe entrepreneurs out there that market themselves as "turn your idea into a working prototype in 10 seconds"... the issue? Very few people have good ideas and even fewer decide to test them BEFORE they start creating anything.

My absolute favourite task as a UX researcher was being asked to do research for things that did not exist yet. Something with no product and no prototype; at best, a series of post-its on a whiteboard. Now THAT's how you design a successful product.

Step 1: Pick a problem OR a target audience.
Step 2: Decide which problem you want to solve for which target audience.
Step 3: Research research research.
Prototyping doesn't come until AT LEAST step 4.

Usually, if you don't have access to your target audience, this research fails. But in the service design world, we have a workaround: we sit at a table, brainstorm, and co-create. We talk to experts who KNOW the audience. We exchange and challenge and improve our own ideas so that we don't just create something random.

So now that I am "indie hacking" on my own, with AI doing all my coding, all my front-end, and all my UI, what am I to do?

Instead of just asking AI to build for you, ask it to design WITH you. Have it act as the expert who is (allegedly) smarter than you and (allegedly) knows your target audience better than you, and ask it to tell you why YOUR ideas are sh*t.

More importantly - ask AI to actually provide you with the customer segments, personas, business canvases, problem statements, and product requirements; have it write all the documentation and create your kanban boards and todos. Have it do all of that, and then you can just feed all of those documents back to it when it's time to actually build something useful.
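One way to make this "deliverables first" workflow repeatable is to keep the asks as templates rather than ad-hoc chat messages. A minimal sketch, assuming Python; the deliverable names and template wording here are my own illustration, not a fixed method from the post:

```python
# Hypothetical prompt templates for the "pre-vibe" deliverables:
# customer segments, personas, problem statements, and a critique pass.
# Each filled-in prompt is sent to whatever AI model you use; the
# responses become the documents you feed back in at build time.

DELIVERABLES = {
    "customer_segments": (
        "List 3-5 distinct customer segments for: {idea}. For each, "
        "note segment size, urgency of the problem, and willingness to pay."
    ),
    "personas": (
        "Create two personas per segment for: {idea}. Include goals, "
        "frustrations, and the workarounds they use today."
    ),
    "problem_statement": (
        "Write a one-paragraph problem statement for: {idea}. State who "
        "has the problem, when it occurs, and the cost of not solving it."
    ),
    "critique": (
        "Act as a skeptical domain expert. List the five strongest "
        "reasons {idea} will fail, ranked by likelihood."
    ),
}

def build_prompt(deliverable: str, idea: str) -> str:
    """Fill one deliverable template with the product idea."""
    return DELIVERABLES[deliverable].format(idea=idea)

prompt = build_prompt("critique", "an AI-powered recipe app")
print(prompt)
```

The point of the dictionary is that the same idea gets pushed through every deliverable, so you can't skip the uncomfortable ones.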

I cannot think of a good term for this delegation of the service design process. But it's what comes before "vibe coding" and what will make you stand out from 99.99% of the other amazingly-undervalidated-idea-turned-shitty-web-apps out there.

on January 25, 2026
  1. 3

    Agree with the comment below about distribution being one of the biggest "bugs" to solve

  2. 2

    The "problem → audience → research → prototype" order you laid out is where most projects silently die. People skip research because it feels slow, then spend weeks building something that validates the wrong assumption.

    What I've found useful: treating AI less as a builder and more as a pre-mortem partner. Before writing any code, I'll prompt it with "play devil's advocate — why would someone not use this?" The objections it generates often expose gaps that wouldn't surface until after launch.

    The distribution point others raised is real too. Building is now the easy part. The hard part is earning attention in a market where everyone can ship fast. UX research upfront helps you figure out not just what to build, but how to position it so people actually care.

    Curious if you've found AI useful for competitive analysis in this "pre-vibe" phase — understanding what's already out there and where the real gaps are.

    1. 1

      100%, playing devil's advocate is the best way to approach this. And yes, it works wonders for benchmarking and competitor analysis. I would say it's the second-best thing after having access to the actual target users ;)

  3. 2

    Great perspective, this really flips the usual “vibe coding hype” on its head. So many people jump straight into AI tools to build prototypes without asking the hard questions first — what problem are we solving, for whom, and why would they choose this product at all? The point about research before building really resonates, because even if AI can generate UI and code fast, that alone doesn’t guarantee anyone will use what you build. Focusing on UX, audience validation, and real human needs before generating prototypes with AI seems like a smarter path to meaningful products.

  4. 2

    This is so true. Vibe coding is getting scary good, but it also makes it easier than ever to ship something nobody asked for. The real bottleneck was never “can you build it,” it was always “are you solving the right problem for the right people?”

    The “research before prototypes” point is the cheat code. Once you build, you get emotionally attached and start defending decisions instead of validating them.

    Healthcare is the perfect example. You can build “AI home health software” fast, but if you haven’t talked to agency owners, admins, and field clinicians, you’ll miss the real pain. They don’t want AI for the sake of AI. They want fewer missed handoffs, cleaner documentation, smoother scheduling, and less chaos during compliance pressure.

    AI can generate code, screens, and flows. The edge is using it to sharpen the problem, test assumptions, and design UX that actually gets adopted.

  5. 2

    I think that what's more important than what products you create using Vibe Coding is how to make those products visible. And for most people, this is a very difficult hurdle to overcome.

  6. 2

    This resonates. "Vibe coding" gets all the hype, but the real leverage is in what you're describing — using AI as a sparring partner before you write a single line.

    I've noticed the same gap building a tech news aggregator. The temptation is to jump straight into features, but AI is actually more valuable when I use it to challenge my assumptions: "Who exactly needs this? What problem does this actually solve? What would make them stop using it?"

    The step order you laid out (problem → audience → research → THEN prototype) is exactly where most people skip. They go idea → prototype → "why isn't anyone using this?"

    Curious: when you use AI to critique your own ideas, do you find it more useful to prompt for specific failure modes ("how could this fail?") or general critique ("what's wrong with this?")?

    1. 3

      I often ask it to do "worst case scenarios" and propose its own risk mitigation strategy. It also helps to give the same prompt or design challenge but ask it to impersonate different people - e.g. a UX expert vs a marketing expert.
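      The "same challenge, different personas" tip above can be sketched as a tiny helper that wraps one design challenge in several expert framings. A hedged illustration, assuming Python; the persona list and wording are my own, not a prescribed set:

      ```python
      # Hypothetical sketch: wrap one design challenge in several expert
      # personas, each asked for worst-case scenarios plus mitigations.

      PERSONAS = ["UX expert", "marketing expert", "skeptical first-time user"]

      def persona_prompts(challenge: str) -> list[str]:
          """One prompt per persona for the same design challenge."""
          return [
              f"You are a {p}. Walk through the worst-case scenarios for "
              f"this idea and propose your own risk mitigation strategy.\n\n"
              f"Idea: {challenge}"
              for p in PERSONAS
          ]

      for prompt in persona_prompts("a no-signup recipe swiping app"):
          print(prompt.splitlines()[0])
      ```

      Sending the identical challenge through each framing is what surfaces the different blind spots: friction from one persona, positioning from another.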

      1. 2

        The persona approach is clever — getting a "UX expert" and "marketing expert" to critique the same idea surfaces very different blind spots. One focuses on friction, the other on positioning.

        I've been experimenting with something similar: asking AI to argue against my feature ideas from the perspective of a skeptical user who's seen 10 similar tools fail. The objections it generates are often more useful than generic "what's wrong with this" prompts.

        The "worst case + mitigation" combo is solid too. Forces you to think about failure modes before you're emotionally invested in the solution.

        Thanks for sharing the workflow.

      2. 1

        Oooo such a good tip!

  7. 1


    The technical execution has never been cheaper. The idea selection is still as hard as it's always been. That gap is exactly what I'm working on. Find more about me @the_vibepreneur

  8. 1

    This hits different as someone doing weekend vibe coding sessions.

    Built a recipe app with Claude Code recently — the code took hours, but figuring out the right UX (no signup, instant swiping, AI learning your taste) took weeks of thinking.

    Your point about asking AI to challenge your ideas is underrated. I now start projects by having Claude poke holes in my assumptions before writing a single line.

  9. 1

    The "90% of my job was politely telling people their ideas were shit" line hit home. I've seen so many founders jump straight to building because AI makes it easy now, but you're just speedrunning toward the wrong solution. Curious though, when you're using AI as that expert on your audience, how do you validate it's not just hallucinating convincing BS?

    1. 1

      Personally I have a process which is more of a "brainstorming" session involving myself and two different AI models. I give them the same design challenge, ask both for feedback, then have them cross-check their answers as well as my own feedback, and start again. Sometimes two-three times until they find a "compromise".
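      The cross-checking loop described above can be sketched as a small control flow. A minimal illustration, assuming Python; `ask_model_a` and `ask_model_b` are stand-ins for calls to two different AI models (stubbed here so the loop is runnable), and the round count and prompt wording are my own assumptions:

      ```python
      # Hypothetical sketch of the two-model brainstorming loop: both
      # models get the same design challenge, then each critiques the
      # other's answer for a few rounds until they converge.

      def ask_model_a(prompt: str) -> str:
          return f"[model A response to: {prompt}]"  # stub for model call

      def ask_model_b(prompt: str) -> str:
          return f"[model B response to: {prompt}]"  # stub for model call

      def cross_check(challenge: str, rounds: int = 3) -> list[tuple[str, str]]:
          """Run the challenge past both models, then have each review
          the other's latest answer, repeating for `rounds` iterations."""
          transcript = []
          answer_a = ask_model_a(challenge)
          answer_b = ask_model_b(challenge)
          for _ in range(rounds):
              transcript.append((answer_a, answer_b))
              answer_a = ask_model_a(
                  f"Critique and improve this answer: {answer_b}\n"
                  f"Original challenge: {challenge}"
              )
              answer_b = ask_model_b(
                  f"Critique and improve this answer: {answer_a}\n"
                  f"Original challenge: {challenge}"
              )
          return transcript

      history = cross_check("Design onboarding for a recipe app", rounds=2)
      ```

      Keeping the transcript around matters: the disagreements between rounds are often more informative than the final "compromise."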

  10. 1

    This nails something I have been experiencing firsthand. I set up an AI assistant to help run my businesses and the most valuable output has not been code - it has been the strategic pushback.

    I literally had to program rules into its personality file like "challenge bad ideas" and "no new features until we have 10 paying customers" because by default it just agrees with everything. Once I did that, it became the UX researcher/devil's advocate I could never afford to hire.

    Best example: I was about to build a bunch of new features for my SaaS. The AI pulled up competitor analysis, showed me my product was already 95% complete for MVP, and basically said "you are avoiding sales by hiding behind code." It was right.

    The pre-vibe work you describe - personas, problem statements, product requirements - is exactly what I now have it generate before any building happens. The irony is most people use AI to build faster when they should use it to think better first.

  11. 1

    This is so true — AI makes building fast, but it doesn’t fix building the wrong thing. Designing the problem before the product is the real edge.

  12. 1

    Great point. AI can definitely speed up development, but without proper research and planning, even the best-built product can fail. Validating ideas, understanding users, and structuring content first always leads to better results than jumping straight into building.

    I follow the same strategy when organizing content projects like my breakfast menu guides, where planning and clarity come before execution. It saves a lot of rework later.

  13. 1

    "90% of my job was telling people their ideas were sh*t" — honestly the most valuable skill in product. Everyone's in love with their solution, nobody wants to hear the problem doesn't exist.

    I've been building some dev tools lately and the hardest part isn't the code, it's resisting the urge to add features nobody asked for. The tools I actually use daily look nothing like my original "vision" because real usage exposes what matters vs what sounds cool.

    The pre-mortem approach someone mentioned above is gold. Way easier to kill a bad idea on paper than after you've spent 3 weeks on it.

  14. 1

    We built an open-source AI orchestration tool after struggling with multi-agent workflows

    Over the last few months, while working with AI tools in real projects, we kept running into the same limitation:

    Most AI assistants work well for single prompts, but once tasks become multi-step or project-level, things start breaking down — context loss, inconsistent outputs, and no clear way to understand why something happened.

    We initially tried stitching things together with prompts and scripts, but it quickly became fragile.

    So we built AutomatosX to solve this internally.

    The idea wasn’t to build another chat interface, but to focus on orchestration — planning tasks, routing work through the right agents, cross-checking outputs, and making everything observable and repeatable.

    What AutomatosX currently focuses on:

    Specialized agents (full-stack, backend, security, DevOps, etc.) with task-specific behavior

    Reusable workflows for things like code review, debugging, implementation, and testing

    Multi-model discussions, where multiple models (Claude, Gemini, Codex, Grok) reason together and produce a synthesized result

    Governance & traceability, including execution traces, guard checks, and auditability

    Persistent context, so work doesn’t reset every session

    A local dashboard to monitor runs, providers, and outcomes

    One thing we learned quickly is that orchestration matters more than prompting once AI is used for real development work. Reliability, explainability, and repeatability become far more important than raw model capability.

    AutomatosX is open-source and still evolving. If anyone is curious, the repo is on GitHub:
    /defai-digital/AutomatosX

    I’d really appreciate feedback from others who are building or using agent-based systems:

    How are you coordinating agents today?

    What’s been the hardest part to make reliable?

  15. 1

    Finding customers is a challenge.

  16. 1

    This really resonates.
    AI made building cheap, but it didn’t make deciding what to build any easier.

    I’ve watched a few projects die because we treated UX as something you “add later,” when it’s really the thing that decides whether the product deserves to exist at all. You can ship fast and still ship the wrong thing.

    The mental model I’ve been using lately:

    (Vibe coding answers “Can this exist?”, UX answers “Should this exist?”, Distribution answers “Does anyone care?”)

    Most people jump straight to the first question because it feels productive.

    I also like your point about research before prototypes. Even a couple of honest conversations with the right audience beats weeks of building. I’ve had ideas collapse in 20 minutes of talking — which is a win, honestly.

    Curious how you’re using AI for this part in practice.
    Do you treat it more like a sparring partner to challenge assumptions, or as a way to synthesize messy inputs after talking to real people?

    Feels like this “pre-vibe” work is where most indie products quietly succeed or fail.

  17. 1

    How do you keep up with your AI agents? You give the AI one line of command, and it follows up with a page of things to do, asking "yes" or "no"?

    What if we see AI agents slowly start talking to you? Would that be beneficial ?

    Some people are stuck in an AI text box, and just don't know we can do better.

    1. 1

      I always ask my AIs for actual deliverables - customer segment documentation, personas, and kanban boards with ACTIONABLE recommendations that one can clearly tick off once completed.
