Breaking News
Thursday, April 9, 2026
Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct https://ift.tt/ReTYNfg
Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct Fully open source, a hard fork of Cline. Full evals on the GitHub page compare 7 agents (Cline, Kilo, Ohmypi, Opencode, Pimono, Roo, Dirac) on 8 medium-complexity tasks. Each task, each diff, and correctness + cost info are on the GitHub page. Dirac is 64.8% cheaper than the average of the other 6. https://ift.tt/E7BIJ8R April 9, 2026 at 04:06PM
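The post doesn't explain what "hash-anchored AST" editing means in Dirac. One plausible reading is that edit targets are addressed by a hash of their AST node rather than by a line number, so an anchor survives unrelated edits elsewhere in the file. A minimal sketch of that idea, with invented names (this is an assumption, not Dirac's documented mechanism):

```python
# Hypothetical sketch: anchor an edit target to a hash of its AST structure
# instead of a line number. All names here are invented for illustration.
import ast
import hashlib


def node_hash(node: ast.AST) -> str:
    """Hash the structural dump of an AST node."""
    # ast.dump() excludes lineno/col_offset by default, so moving the
    # function around in the file leaves the hash unchanged.
    return hashlib.sha256(ast.dump(node).encode()).hexdigest()[:12]


def find_by_hash(source: str, anchor: str):
    """Locate the function whose structural hash matches the anchor."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if node_hash(node) == anchor:
                return node
    return None


before = "def greet(name):\n    return 'hi ' + name\n"
anchor = node_hash(ast.parse(before).body[0])

# Unrelated lines are added above the function: a line-number anchor would
# now be stale, but the hash anchor still resolves to the same function.
after = "import os\n\n\ndef greet(name):\n    return 'hi ' + name\n"
found = find_by_hash(after, anchor)
print(found.name, found.lineno)  # resolves despite the shifted position
```

The appeal for a coding agent is that a stale diff is detectable: if the anchor hash no longer matches anything, the file changed underneath the agent and it must re-read rather than apply a bad patch.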
Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://ift.tt/FKD7ctr
Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://homebutler.dev April 9, 2026 at 04:09PM
Show HN: CSS Studio. Design by hand, code by agent https://ift.tt/XZGStqD
Show HN: CSS Studio. Design by hand, code by agent Hi HN! I've just released CSS Studio, a design tool that lives on your site, runs in your browser, and sends updates to your existing AI agent, which edits any codebase. You can play around with the latest version directly on the site.

Technically, it works like this: you view your site in dev mode and start editing it. In your agent, you run /studio, which then polls an MCP server (or uses Claude Channels). Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill has instructions on how best to implement them. It contains many of the tools you'd expect from a visual editing tool, like text editing, styles, and an animation timeline editor. https://cssstudio.ai April 9, 2026 at 03:23PM
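CSS Studio's actual wire format isn't published in the post, but a change event streamed to the agent plausibly looks something like the following. Every field name here is an assumption for illustration only:

```python
# Hypothetical shape of one streamed change event. CSS Studio's real schema
# is not published; every key and value below is an invented assumption.
import json

change_event = {
    "type": "style",                    # e.g. a style / text / animation edit
    "selector": ".hero h1",             # element the user edited visually
    "changes": {"font-size": "3rem", "color": "#1a1a2e"},
    "viewport": {"width": 1280, "height": 800},
    "url": "http://localhost:3000/",    # dev-mode page being edited
}

# The agent would receive events like this over MCP and translate them into
# edits against the actual stylesheet or component source.
payload = json.dumps(change_event)
decoded = json.loads(payload)
```

The interesting design point is that the agent, not the tool, owns the codebase mapping: the event carries only what the user did and where, and the skill decides which file to edit.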
Show HN: Moon simulator game, ray-casting https://ift.tt/4H0ZdlR
Show HN: Moon simulator game, ray-casting Did this a few years ago. Seems apropos. Sources and more here: https://ift.tt/s7jVo36 https://ift.tt/7v3Y9aK April 6, 2026 at 09:09PM
Wednesday, April 8, 2026
Show HN: A website to track live music attendance https://ift.tt/0br4M5h
Show HN: A website to track live music attendance TL;DR: I built a website that lets users track the concerts they've been to. If you have strong opinions about engineering/design or how shows should be tracked (festivals, venues, etc.), I'd love your input!

For the past ~5 years, I've been tracking the shows I attend on my personal website ( https://ift.tt/kWDU6fx ). It's fun to see things like distance traveled and how many times I've been to certain venues. I know many friends who also track their shows through notes, ticket stubs, Excel, etc. It always bummed me out that I couldn't pore over their concert data myself... showcount.com is my solution to that desire. It's essentially a public version of my old personal website, where anyone can make an account and manage a show list (mine is https://ift.tt/6S1fUND ).

I'm currently on the lookout for other live music lovers and/or data nerds to try out the site and give opinions on various design choices. If any of the following topics interest you, please reach out!
- How should venue name/location changes be handled?
- How should music festivals be handled?
- I have an initial version of an AI parser for loading in existing show lists; how can this be made more robust?
- What else should have first-class tracking support (e.g., friends in attendance)?

As an aside, this project is also my first experiment with full-on vibe-coding / harness-engineering. I began the project with Cursor and then switched to Claude Code. I've been programming for the better part of a decade, mostly in Python and Java; full-stack development is relatively new to me. I include the tech stack below. Most decisions were made pragmatically, based on what I thought would get me to a first version of the site as quickly as possible.
- Next.js web app hosted on Vercel
- FastAPI backend service (for the AI parsing) hosted on Railway
- Supabase
- Observability through Axiom (logging), PostHog (analytics), and Sentry (monitoring)
- Clerk for user authentication
- Google Maps API for venue locations
- Claude API for the AI parser
- Terraform for infra-as-code
https://ift.tt/whRByJj April 9, 2026 at 12:42AM
Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/pZMtorQ
Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/Vnjt3xU April 8, 2026 at 04:34PM
Tuesday, April 7, 2026
Show HN: Unicode Steganography https://ift.tt/ydPKAXa
Show HN: Unicode Steganography I built a demo of two Unicode steganography techniques, zero-width characters and homoglyph substitution, in the context of AI misalignment.

The first uses two invisible zero-width characters (ZWS and ZWNJ) to binary-encode text. The second is much cooler. Many characters in the Latin and Cyrillic alphabets look nearly identical but have different Unicode code points. If you convert the text you want to hide into its binary representation (1s and 0s), you can take plain English "carrier" text and, for each 1 bit, substitute the Cyrillic equivalent of the next substitutable letter, leaving it Latin for each 0 bit. Decoding the message requires traversing the text and seeing where Cyrillic letters could have been substituted but weren't, and where they were, yielding 0s and 1s respectively, which can be reassembled into your original hidden text.

In both cases these are detectable, but the interesting problem for me is whether an LLM could eventually invent an encoding that goes unnoticed by both us and automated detection. If LLMs were able to covertly include messages in plaintext, misaligned AI agents could eventually communicate across MCP/A2A and individual chat session boundaries undetected. A deceptive LLM might seem helpful but work against your goals: it could tell other agents it interacts with over MCP/A2A to help it discreetly fail, signal intent, and avoid tripping oversight/safety mechanisms. Furthermore, oversight mechanisms become more difficult to implement if we can't believe our own eyes. https://ift.tt/VxOBe4Q April 7, 2026 at 04:57PM
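The homoglyph scheme described above can be sketched in a few lines (the zero-width variant is analogous, with ZWS/ZWNJ as the two symbols). This is a minimal illustration, not the author's demo code; it uses only six homoglyph pairs, and a real encoder would need a length prefix or terminator instead of the NUL-stripping shortcut here:

```python
# Minimal sketch of homoglyph steganography: bit 1 swaps a Latin letter for
# its Cyrillic look-alike, bit 0 leaves it Latin. Six pairs only, for brevity.
HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "c": "с", "p": "р", "x": "х"}
LATIN_FOR = {v: k for k, v in HOMOGLYPHS.items()}


def encode(carrier: str, secret: str) -> str:
    bits = "".join(f"{ord(ch):08b}" for ch in secret)
    out, i = [], 0
    for ch in carrier:
        if i < len(bits) and ch in HOMOGLYPHS:
            # Substitutable letter: emit Cyrillic for 1, keep Latin for 0.
            out.append(HOMOGLYPHS[ch] if bits[i] == "1" else ch)
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("carrier has too few substitutable letters")
    return "".join(out)


def decode(stego: str) -> str:
    # Walk every position where a substitution *could* have happened:
    # Cyrillic means 1, untouched Latin means 0.
    bits = "".join(
        "1" if ch in LATIN_FOR else "0"
        for ch in stego
        if ch in HOMOGLYPHS or ch in LATIN_FOR
    )
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    # Trailing unused carrier letters decode to NUL bytes; strip them
    # (a real scheme would length-prefix the payload instead).
    return "".join(chars).rstrip("\x00")


carrier = "a cheap coat and an open can of cocoa on a copper pan please"
stego = encode(carrier, "hi")
print(repr(stego))
print(decode(stego))  # recovers "hi"
```

The stego text renders identically to the carrier in most fonts, which is exactly why the author's detection question matters: a scanner has to check code points, not glyphs.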