Vibe Coding for Developers: The Good, the Bad, and the Surprisingly Useful
Think of vibe coding (building apps with Generative AI) like having a brilliant but sometimes forgetful coding partner. You know the type: incredibly smart, can solve complex problems, but occasionally needs you to repeat yourself or correct their approach. That's exactly what AI coding assistants are like.
After 25+ years in tech, I thought I'd seen it all. From early chatbots to chess-playing computers, AI progress had been incremental at best. So when our DevRel (Developer Relations) team tried "vibe coding" during an offsite in early 2024, I was skeptical to say the least.
But that first experience changed everything. The AI surprised me with how well it could handle complex tasks, and I realized something profound: "This is going to be the future."
Fast-forward to today: the technology has improved, and I've spent months working with AI coding assistants like Cursor IDE and Claude Code. So let me share what I've learned about the different ways you can work with AI, the real-world applications that actually deliver value, and the honest limitations that still exist.
The Developer's Spectrum: Different Ways to Work with AI
What's fascinating is that everyone on our DevRel team approaches AI collaboration differently. There's no one "right" way to do this, and that's perfectly okay. Here's just a sample of the ways you can vibe code:
My methodical approach: I treat AI like a very junior developer who needs extreme guidance. I'll start by saying, "Use this API and incrementally implement the feature with this architecture." Or I'll open a conversation with, "Hey, here's the problem. How would you go about solving this?" followed, in shouting block capitals, by "DO NOT WRITE ANY CODE." Then it gives me a step-by-step plan, I question the plan, correct it, and only then do I say, "Yeah, go ahead and implement this plan."
The document-based approach: Other team members keep the entire context of the application in a series of documents. Their conversation with the AI is a higher-level back-and-forth: agreeing on an end system state, making incremental adjustments, and updating the system design docs as they go, until the code actually behaves as documented.
The enhancement approach: Some developers use AI to fill specific skill gaps. For example, I'm terrible at UI design, but AI is very good at it. Our design team is even better, but they can't help me every time I need to write an application. AI has filled that gap in my skill set.
As you can see, it really comes down to your personal preference and what your optimal workflow looks like.
Filling the Gaps: How AI Complements Your Skills
This is where AI really shines. Every developer has weaknesses, and AI can help bridge those gaps.
Take my UI design weakness for example. Instead of spending hours on UI or adding to the bandwidth of my teammates, I can describe what I want, and AI can generate decent-looking interfaces. It's not perfect, but it's better than what I could create from scratch.
Test data generation is another great example. This process is not a core part of coding, but something that can take considerable time. I needed some test users with androgynous names because I didn't want to have logic for gender-specific avatars. AI generated that list in seconds, saving me from hunting down obscure name databases.
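For illustration, the kind of fixture it produced looked something like this (the names and shape here are just an example, not the actual demo data):

```typescript
// Example test users with gender-neutral names, so a single avatar set works for everyone.
// Purely illustrative; generate whatever shape your own app needs.
const testUsers = [
  { id: "u-001", name: "Alex Morgan" },
  { id: "u-002", name: "Jordan Lee" },
  { id: "u-003", name: "Casey Rivera" },
  { id: "u-004", name: "Riley Chen" },
  { id: "u-005", name: "Taylor Brooks" },
];
```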
Complex state management is another area where AI has surprised me. I had a voting feature that required intricate React state handling. I told the AI exactly what I wanted, and it implemented it correctly on the first try.
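To give a flavor of what that looked like: the demo's actual code was more involved, but the shape of the problem was roughly the reducer below. The types and action names are illustrative stand-ins, and I'm assuming a React useReducer-style approach.

```typescript
import { useReducer } from "react";

// Illustrative only: the real demo's state shape and action names differed.
type VoteState = {
  votes: Record<string, number>;    // optionId -> current tally
  votedBy: Record<string, string>;  // userId -> optionId they chose
  closed: boolean;
};

type VoteAction =
  | { type: "cast"; userId: string; optionId: string }
  | { type: "retract"; userId: string }
  | { type: "close" };

function voteReducer(state: VoteState, action: VoteAction): VoteState {
  switch (action.type) {
    case "cast": {
      if (state.closed) return state;
      const votes = { ...state.votes };
      const previous = state.votedBy[action.userId];
      if (previous) votes[previous] = (votes[previous] ?? 1) - 1; // undo any earlier vote
      votes[action.optionId] = (votes[action.optionId] ?? 0) + 1;
      return { ...state, votes, votedBy: { ...state.votedBy, [action.userId]: action.optionId } };
    }
    case "retract": {
      const previous = state.votedBy[action.userId];
      if (!previous || state.closed) return state;
      const votes = { ...state.votes, [previous]: state.votes[previous] - 1 };
      const votedBy = { ...state.votedBy };
      delete votedBy[action.userId];
      return { ...state, votes, votedBy };
    }
    case "close":
      return { ...state, closed: true };
    default:
      return state;
  }
}

// Hook wrapper a component can use:
export function useVotes() {
  return useReducer(voteReducer, { votes: {}, votedBy: {}, closed: false });
}
```

The point isn't the reducer itself; it's that describing the state transitions precisely ("a user can change their vote, retract it, and nothing changes once voting closes") was enough for the AI to get the implementation right first time.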
For junior developers, AI can be even more valuable. It can explain concepts, suggest best practices, and help debug issues a junior developer might struggle with alone.
The Art of Prompt Engineering: Quality Input = Quality Output
The most impactful lesson I've learned from working with AI is that the more information and detail you provide, the better the output.
Speaking from personal experience, one risk of using AI for development is the temptation to get lazy with these assistants. When I started using these tools, I was instantly impressed with their coding capabilities and generation speed, and I assumed I could get by with high-level prompts because so much usable output appeared so easily. That's not the right approach.
If you actually spend the time defining what you're trying to build before involving AI - even if it's just sketching out your app on paper so you have a clear vision before getting started - the results are far better. Being as descriptive as possible makes a massive difference to your output.
So if there's one thing you take away from this blog post, it should be this: give as much detail as possible upfront to get a stronger output in the end.
Powering AI Agents with MCP Servers
One of the most exciting developments in vibe coding is the rise of Model Context Protocol (MCP) servers. These are like giving your AI coding assistant a direct line to powerful tools and APIs.
Think of MCP servers as translators between your AI agent and the tools you need. Instead of your AI having to search the web for outdated documentation, it can directly access live APIs, real-time data, and verified code snippets.
With MCP servers, I've found that integrating new features becomes dramatically faster and more reliable. Instead of constantly switching between browser tabs and my code editor to hunt down documentation or code snippets, my AI assistant can pull exactly what I need straight into my environment with a single command.
For instance, the PubNub MCP Server makes it incredibly easy to incorporate features like real-time chat into your applications: you can ask for "real-time chat with presence" and get runnable code in seconds. This cuts down on the usual context switching that so often slows you down.
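To give a sense of what "runnable code in seconds" means, here's roughly the kind of snippet you end up with. The keys and channel name are placeholders, and the exact API surface depends on the PubNub JavaScript SDK version you're on, so treat this as a sketch rather than copy-paste-ready code:

```typescript
import PubNub from "pubnub";

// Placeholder keys; use your own keyset from the PubNub Admin Portal.
const pubnub = new PubNub({
  publishKey: "pub-c-...",
  subscribeKey: "sub-c-...",
  userId: "support-agent-1",
});

// Listen for chat messages and presence (join/leave/timeout) events.
pubnub.addListener({
  message: (event) => {
    console.log(`[${event.channel}] ${event.publisher}: ${JSON.stringify(event.message)}`);
  },
  presence: (event) => {
    console.log(`${event.uuid} ${event.action} (occupancy: ${event.occupancy})`);
  },
});

// Subscribe with presence enabled so join/leave events arrive alongside messages.
pubnub.subscribe({ channels: ["support-room-1"], withPresence: true });

// Publish a message into the room.
pubnub
  .publish({ channel: "support-room-1", message: { type: "text", text: "Hi, how can I help?" } })
  .then((res) => console.log("published at", res.timetoken))
  .catch(console.error);
```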
The flow is simple: you ask the AI to set up the MCP server, and from there you can send a request for docs or an API action, and get exactly what you need back. It's like having a knowledgeable colleague who always has the right information at their fingertips.
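To make that concrete, registering an MCP server in a client like Cursor or Claude Code usually comes down to a small JSON entry along these lines. The server name and package below are placeholders; follow your tool's and the PubNub MCP Server's own setup instructions for the exact values.

```json
{
  "mcpServers": {
    "pubnub": {
      "command": "npx",
      "args": ["-y", "<pubnub-mcp-package>"]
    }
  }
}
```

Once the server is registered, a prompt like "use the PubNub MCP server to fetch the presence docs" resolves against live documentation rather than whatever the model happened to memorize during training.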
Real-World Applications: Where AI Actually Delivered
The effects of quality AI output can be seen across all kinds of use cases, but I'll share a recent example to illustrate. Our sales team requested a demo they could show to potential PubNub customers creating call center applications, or adding real-time chat to an existing customer communication platform. This is a great use case for PubNub as we can show user presence, as well as real-time messaging and state management.
Ideally, you would have a dedicated demo for every vertical, individually tailored to the prospect - that's a tall ask when time is short, but it's exactly the kind of thing AI can make practical.
With the help of AI coding agents, I put together our sales demo in just a few days. The demo supports online/offline user presence and real-time messaging, including multiple-choice questions and surveys, and I was even able to work in an AI chatbot to handle customer queries when the agent is offline.
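One piece of that demo is easy to sketch: deciding whether a customer message goes to a human agent or to the chatbot, based on presence. The snippet below is a rough illustration rather than the demo's actual code; the keys, channel names, and the chatbot handoff helper are all hypothetical.

```typescript
import PubNub from "pubnub";

const pubnub = new PubNub({
  publishKey: "pub-c-...",   // placeholder keys
  subscribeKey: "sub-c-...",
  userId: "routing-service",
});

// Hypothetical channel naming; the real demo uses its own scheme.
const AGENT_CHANNEL = "support-agents";

async function routeCustomerMessage(text: string, customerChannel: string) {
  // hereNow reports who is currently subscribed to a channel (PubNub presence).
  const presence = await pubnub.hereNow({ channels: [AGENT_CHANNEL] });
  const agentsOnline = presence.channels[AGENT_CHANNEL]?.occupancy ?? 0;

  if (agentsOnline > 0) {
    // At least one human agent is online: forward the customer's message to them.
    await pubnub.publish({ channel: AGENT_CHANNEL, message: { from: customerChannel, text } });
  } else {
    // No agents online: hand the conversation to the AI chatbot (hypothetical helper).
    await askChatbotAndReply(text, customerChannel);
  }
}

// Stub for the chatbot handoff; in the demo this calls an LLM and publishes the answer back.
async function askChatbotAndReply(text: string, customerChannel: string) {
  await pubnub.publish({
    channel: customerChannel,
    message: { from: "ai-assistant", text: `Thanks for your question: "${text}" - an agent will follow up shortly.` },
  });
}
```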
Great strides are being made every day in the ability of LLMs to connect to other data sources and systems. The PubNub MCP Server is a great example of this, and I found it incredibly helpful when building the application in this scenario.
The Reality Check: What Still Doesn't Work
Let's be honest about what doesn't work yet. I've learned these lessons the hard way through trial and error, so hopefully my experiences can help you on your own AI journey.
Debugging complexity is probably the biggest challenge with vibe coding. AI can struggle to debug its own issues, and when something goes wrong it can head down a completely wrong path during triage. I have to put effort into understanding the source code of what it's doing and what it's outputting, and sometimes I just have to start from scratch with a better-defined prompt.
Then there are the false limitations. AI sometimes claims things aren't possible when they are. A member of my team recently had a humorous back-and-forth about what the PubNub Access Manager is capable of, and the model got very confused. It was frustrating at the time, but it gave us important insights to improve our PubNub MCP Server so that you won't hit the same issues. Sometimes it makes sense to do your own research - and never be afraid to question the AI.
Tool instability is another reality we've had to face. We're constantly adapting to changing capabilities and tool availability. I started with Copilot, then moved to Cursor, and now we're switching to Claude Code. You can't get attached to specific tools because they change so rapidly, and each has its own advantages depending on the project at hand.
And finally, while AI can do some impressive things, there will always be a need for human oversight. You must always review and validate AI-generated code. AI can make mistakes, and you need to catch them before they cause problems. This is a non-negotiable worth learning early, before issues arise.
Team Evolution: How We Learned Together
Our DevRel team's journey with AI hasn't been perfectly smooth, but we've figured it out through an openness to learn and regular touchpoints to share our takeaways. Every challenge has been valuable, and our journey exemplifies how organizations can adapt to and grow with AI tools.
We started with structured learning. We took a dedicated sprint to understand the tools available to us and create workflows for ourselves. We also shared feedback with each other: "Hey, this worked for me. Why don't you try XYZ?" It was like having a study group where everyone was learning at the same time.
We still hold a weekly DevRel call that originally focused on the best policies and practices we'd come up with. Over time, it's turned into more of an applied practice: team members share exciting features they've built with AI or ask about interesting findings from each other's projects. It has helped shift our approach to AI from theory to practical application.
Knowledge sharing became crucial over time, and we've created feedback loops so the team can share what works and what doesn't. It's like having a collective brain where everyone's experiences contribute to the group's knowledge base.
Key Takeaways for Technical Teams
Based on my experience, these are the key takeaways I'd suggest to anyone looking for guidance or just starting out in the vibe coding space:
Start with low expectations: Let AI surprise you rather than disappoint you. I still have fairly low expectations, so it always manages to surprise me even now.
Be adaptable: Don't get attached to specific tools. They change rapidly. You can't be set in your ways with a particular LLM.
Use AI to complement your weaknesses: Don't try to replace your strengths, but use AI to fill gaps in your skill set.
Always maintain human oversight: Review and validate AI-generated code. You're still accountable for the final product.
Invest time in clear, detailed prompts: The effort you put into prompts directly correlates with output quality.
Wrapping Up
My role has evolved from pure coder to what I call an "AI conductor" - orchestrating AI tools while maintaining technical judgment and oversight. It's more about understanding what to ask, how to guide, and when to intervene, rather than writing every line of code manually.
The importance of technical judgment alongside AI assistance cannot be overstated. AI is a powerful tool, but it's not a replacement for human expertise. You need both to be successful in today's world (and the future development landscape).
Vibe coding is here to stay, despite its current limitations. The technology is improving rapidly, and the benefits are real. But it requires a thoughtful approach, good prompt engineering, and human oversight to deliver quality results day to day.
For teams just starting their AI journey, my advice is simple: keep an open mind, start small, and be patient. The learning curve is real, but the rewards are worth it.
The future of development isn't about choosing between human judgment and AI assistance. It's about combining them effectively to create better software, faster. Remember - you're not being replaced by AI; you're being augmented by it.
Ready to start your vibe coding journey? Start with one small project and see how AI can help you build something amazing.