I agree - “fixes” are a cool opportunity. I’ve built a runtime analysis of code execution that spots certain types of flaws and anti-patterns that static analyzers can’t find. Then I use a combination of the execution trace, the code, and the finding metadata (e.g. an OWASP URL) to create a prompt. The AI responds with a detailed description of the problem (in the style of a PR comment) and a suggested fix. Here’s a short video of it in action - lmk what you think.
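Roughly, the prompt assembly step looks something like this - a simplified sketch, not the actual implementation; the function and field names (Finding, build_fix_prompt, etc.) are just illustrative:

```python
# Simplified sketch of combining a runtime finding, the offending code,
# and a trace excerpt into a single prompt for the model.
from dataclasses import dataclass


@dataclass
class Finding:
    rule_id: str        # internal id of the detected flaw/anti-pattern
    summary: str        # short human-readable description
    reference_url: str  # e.g. a link to the relevant OWASP page


def build_fix_prompt(finding: Finding, code_snippet: str, trace_excerpt: str) -> str:
    """Assemble the finding metadata, code, and runtime trace into one prompt."""
    return (
        f"A runtime analysis flagged the following issue: {finding.summary}\n"
        f"Reference: {finding.reference_url}\n\n"
        "Relevant code:\n"
        f"{code_snippet}\n\n"
        "Execution trace excerpt:\n"
        f"{trace_excerpt}\n\n"
        "Explain the problem in the style of a PR review comment, "
        "then suggest a concrete fix."
    )

# The resulting prompt goes to the model; its response is rendered as a
# PR-style comment next to the suggested patch.
```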
It looks quite powerful. I would focus on adoption and usability over adding any more features. I feel like there's a lot of value there already, but I'm not exactly sure how I'd integrate it into my workflow.
The CI integration sounds like the most interesting part to me, since I usually let things fail in CI then go back and fix them.
It's kind of in an interesting spot because it's not instant feedback like a linter/type checker, but only running it in CI feels like a waste of potential.
Thanks for the advice! I agree with your characterization of it as somewhere between “instant” and “too late” (e.g. in prod) feedback. We are focusing on the code editor and GitHub Actions at the moment. For example, figuring out what happens after the GitHub Action identifies a problem. Do you try to fix it directly in the browser? Or go back to the code editor to inspect the issue and work on the AI-assisted fix? Fixing “in browser” feels awkward to me, but I have seen some videos of Copilot X doing this so maybe it’s possible? Working with the code back in the code editor is of course much more powerful, but it takes some work to set up the context locally to work on the fix. Wdyt?
https://www.loom.com/share/969562d3c0fd49518d0f64aecbddccd6?...