How Vibe Coding and Agentic AI Are Changing Software Development and Security

Vibe Coding at Heroku with Vish Abrams

AI tools are changing the way developers write software. While it’s hard to say exactly how much code is AI-generated today, estimates suggest it’s somewhere between 20% and 40%—and that number is only going up. This shift is creating a new kind of development experience where developers act more like directors, guiding AI to write and refine code instead of crafting every line themselves.

This style of working, recently dubbed “vibe coding” by Andrej Karpathy, transforms the programmer’s role. Instead of diving into low-level code all the time, developers now steer AI-generated solutions—offering direction, reviewing output, and making critical decisions. It’s a collaborative process that combines human creativity with AI’s speed and efficiency.

Vish Abrams, Chief Architect at Heroku and former engineer at Oracle and NASA, joined Kevin Ball (a.k.a. KBall) on the podcast for a deep dive into this emerging coding paradigm. They discussed the evolving role of AI in software development, the limits and potential of vibe coding, differences between solo and team AI tools, the Model Context Protocol (MCP), and Heroku’s new managed inference service.

KBall is also the VP of Engineering at Mento, an engineering leadership coach, and the founder of the San Diego JavaScript meetup and the “AI Inaction” discussion group via Latent Space.


Endor Labs Tackles Vibe Coding Risks with AI Agents

As AI-generated code becomes more common, so do the security risks that come with it. Endor Labs is responding to this challenge by upgrading its application security (AppSec) platform with intelligent AI agents built to detect, prioritize, and even fix code vulnerabilities automatically.

With one of the industry’s most comprehensive security datasets and a deep focus on agentic AI, Endor Labs goes beyond simple alerting. Their platform is built to keep up with the speed and scale of modern development—where large volumes of code are generated with minimal human oversight.

Endor Labs co-founder and CEO Varun Badhwar sums it up:

“We’re in the middle of a software development revolution. While most code used to come from open source, we’re quickly moving to a future where 80% is generated by AI. That future is arriving faster than many realize.”

Unlike many tools that are simply wrappers around language models, Endor Labs claims their strength lies in the quality and depth of their data. Over three years, their team has:

  • Analyzed 4.5 million open source projects and AI models

  • Mapped 150+ risk factors to software components

  • Built detailed call graphs indexing billions of functions and libraries

  • Precisely annotated known vulnerable lines of code

This groundwork powers their new agentic AI features that integrate seamlessly into the development lifecycle, helping security teams take immediate, informed action.


Agentic AI in Action

The enhanced platform introduces security-focused AI agents designed to think and act like human developers or security experts. These agents evaluate code changes, identify potential threats, and propose accurate, context-aware fixes—all without slowing down the development process.

The first major capability: AI Security Code Review.
This feature assigns multiple AI agents to inspect every pull request (PR), focusing on changes often missed by traditional static analysis tools, such as:

  • Introducing prompt injection risks in AI systems

  • Modifying authentication or authorization logic

  • Creating new public-facing APIs

  • Altering cryptographic functions

  • Handling sensitive data improperly

Key benefits include reducing noise through smarter prioritization, uncovering hidden critical risks, and letting security engineers focus on what matters most—without getting in the way of fast-paced vibe coding.

Mark Breitenbach, a security engineer at Dropbox, notes:

“We’re looking for scalable ways to detect business logic flaws and unknown issues that traditional tools miss. This kind of AI-powered insight is a game changer.”


Meta-Code Protocol (MCP) Plugin for Cursor

To help developers code more securely in real time, Endor Labs has introduced a Meta-Code Protocol (MCP) plugin for AI-native environments like Cursor. This tool integrates directly with AI assistants like GitHub Copilot and monitors code as it’s written—flagging risks and suggesting fixes instantly.

The goal? Collapse what used to be a slow, manual security process—filled with tickets and long review cycles—into a real-time, in-editor experience. Fix issues before the pull request is even submitted.

Chris Steffen, VP of Research at Enterprise Management Associates, adds:

“Security teams need tools that deliver both visibility and action. Endor Labs is leading the way by pairing AI innovation with real-world security expertise.”


The Future of Secure Vibe Coding

Endor Labs is positioning itself as a key player in this next era of software development—one where AI writes more code than humans, and security needs to evolve to keep up. Their platform aims to help organizations address whole categories of risks before they become production problems.

Leave a Comment