Tuesday, February 27, 2024
Show HN: Leaping – Open-source debugging with LLMs https://ift.tt/bIPc28l
Hi HN! We’re Adrien and Kanav. We met at our previous job, where we spent about a third of our lives combating a constant firehose of bugs. In the hope of reducing this pain for others, we’re working on automating debugging.

We started by capturing information from running applications so we could ‘replay’ relevant sessions later. Our approach for Python involved extensive monkey patching: we’d use OpenTelemetry-style instrumentation to hook into the request/response lifecycle and capture anything non-deterministic (random, time, database/third-party API calls, etc.). We would then run your code again, mocking out the non-determinism with the captured values from production, which let you fix production bugs with the local debugger experience (a rough sketch of this record/replay idea is at the end of this post). You might recognize this as a variant of omniscient debugging. We think it was a nifty idea, but we couldn’t get past the performance overhead and security concerns.

Approaching the problem differently, we asked: could we not just grab a stack trace and sort of “figure it out” from there? Whether that’s possible in the general case is up for debate, but we think that eventually the answer is yes. The argument goes as follows: developers can solve bugs not because they are particularly clever or experienced (though it helps), but because they are willing to spend enough time coming up with increasingly informed hypotheses (“was the variable set incorrectly inside this function?”) that they can test in tight feedback loops (“let me print out the variable before and after the function call”). We wondered: with the proper context and guidance, why couldn’t an LLM do the same?

Over the last few weeks, we’ve been working on an approach that emulates the failing-test approach to debugging: first reproduce the error in a failing test, then fix the source code, and finally run the test again to make sure it passes. Concretely, we take a stack trace and start by simply re-running the function that failed. We then report the result back to the LLM, add relevant source code to the context window (with Tree-sitter and LSP), and prompt the model for a code change that gets us closer to reproducing the bug. We apply those changes, re-run the script, and keep looping until we hit the same error as the original stack trace. Then the LLM formulates a root cause and generates a fix; we run the code again, and if the bug goes away, we call it a day (a toy sketch of this loop is also at the end of this post). We’re also looking into letting the LLM interact with a pdb shell, as well as implementing RAG for better context fetching.

One thing that excites us about generating a working test case with a step-by-step explanation for the fix is that the results are somewhat grounded in reality, which makes hallucinations/confabulations less likely.

Here’s a 50-second demo of how this approach fares on a (perhaps contrived) error: https://ift.tt/2fRKJXY

We’re working on releasing a self-hosted Python version in the next few weeks on our GitHub repo: https://ift.tt/RDrp8VB (right now it’s just the demo source code). This is just the first step towards a larger goal, so we’d love to hear any and all feedback/questions, or feel free to shoot me an email at adrien@leaping.io!
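
A minimal sketch of the record/replay idea described in the post, assuming random.random is the only source of non-determinism; the names (recording_random, make_replaying_random, handle_request) are invented for illustration and this is not Leaping's actual instrumentation:

```python
# Minimal record/replay sketch (illustrative only, not Leaping's actual
# instrumentation). random.random stands in for any source of non-determinism.
import random

RECORDED = []                       # values captured while "recording" in production
_original_random = random.random

def recording_random():
    """Capture each value produced by random.random so it can be replayed later."""
    value = _original_random()
    RECORDED.append(value)
    return value

def make_replaying_random(captured):
    """Return a stand-in that replays the captured values in order."""
    values = iter(captured)
    return lambda: next(values)

def handle_request():
    # Imagine this is application code whose behaviour depends on non-determinism.
    return "flaky path" if random.random() < 0.5 else "normal path"

# Record: run in "production" with the instrumented function patched in.
random.random = recording_random
production_result = handle_request()

# Replay: locally, mock out the non-determinism with the captured values,
# so the run behaves exactly like the production session did.
random.random = make_replaying_random(RECORDED)
assert handle_request() == production_result
random.random = _original_random    # restore the real function
```

The real approach described above applies the same trick to time, database calls, and third-party APIs, which is where the performance and security costs come from.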
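
And a toy, self-contained sketch of the reproduce-then-fix loop: the "LLM" here is a canned stub (fake_llm), whereas the real system prompts a model with the stack trace plus source context gathered via Tree-sitter and LSP; the buggy parse_age function and the helper names are invented for illustration:

```python
# Toy sketch of the reproduce-then-fix loop. fake_llm is a canned stub; the real
# system prompts a model with the stack trace and Tree-sitter/LSP-derived source
# context. parse_age is an invented bug used only to make the sketch runnable.

BUGGY_SOURCE = "def parse_age(raw):\n    return int(raw)\n"
REPORTED_ERROR = "ValueError"        # error type pulled from the production stack trace

def run_snippet(source: str, call: str):
    """Execute candidate code plus a call, returning the exception name or None."""
    namespace = {}
    try:
        exec(source + "\n" + call, namespace)
        return None
    except Exception as exc:
        return type(exc).__name__

def fake_llm(task, **context):
    """Stand-in for the LLM; it only knows how to handle this toy example."""
    if task == "reproduce":
        # Propose an input that should trigger the reported error.
        return 'parse_age("not a number")'
    if task == "fix":
        # Root cause: int() raises on non-numeric input; guard it and default to 0.
        return context["source"].replace(
            "return int(raw)",
            "return int(raw) if raw.strip().isdigit() else 0",
        )
    raise ValueError(task)

# Phase 1: loop until a candidate call reproduces the same error as production.
repro_call = 'parse_age("42")'       # start by simply re-running the failing function
for _ in range(5):
    if run_snippet(BUGGY_SOURCE, repro_call) == REPORTED_ERROR:
        break
    repro_call = fake_llm("reproduce", source=BUGGY_SOURCE, last_attempt=repro_call)
else:
    raise SystemExit("could not reproduce the bug")

# Phase 2: ask for a root cause and a fix, apply it, and re-run the reproduction.
fixed_source = fake_llm("fix", source=BUGGY_SOURCE, repro=repro_call)
assert run_snippet(fixed_source, repro_call) is None   # the failing "test" now passes
print("reproduced with:", repro_call, "-> fix verified")
```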