Tuesday, May 12, 2026

Show HN: I spent $100 in Claude tokens and 1k battles training my AI tank https://ift.tt/FK8aBHt

Show HN: I spent $100 in Claude tokens and 1k battles training my AI tank Hi HN, I built AgenTank. It is a small game where an AI agent writes the logic for your tank. You watch it fight, give strategic feedback, let the agent update the tank code, and send it back into battle. I have run 1,000+ battles on my own tank and spent about $200 in Claude credits improving it. The part I enjoy most is not just winning, but watching the tank make visible mistakes, thinking of a better strategy, and seeing whether Claude can turn that into better code. https://ift.tt/pTao2cI May 13, 2026 at 06:20AM

Show HN: Duckflix, an open-source self-hosted media streaming platform https://ift.tt/faWEORy

Show HN: Duckflix, an open-source self-hosted media streaming platform I’ve been working on Duckflix, a self-hosted media streaming platform. It started as a full-stack project to combine a clean streaming UI with a Bun/Elysia backend, FFmpeg processing, SQLite, Docker deployment, and addon support. Website: https://duckflix.fun Demo: https://demo.duckflix.fun GitHub: https://ift.tt/ulfkv2A https://ift.tt/ulfkv2A May 13, 2026 at 01:23AM

Show HN: GIF Pile. a site to make piles of GIFs https://ift.tt/ZrbiuCs

Show HN: GIF Pile. a site to make piles of GIFs I'm quite fond of obnoxious looking gifs in a post-ironic way as a manner of shitposting and/or injecting humor into a chat. The issue with this however is that, for no real good reason at all, the simple usecase of "Have image/gif background, bombard with garbage" had no real good tooling. There are gif editors out there, EZgif my beloved is probably my most used non-search-indexing-slash-social-media-site, but they're kinda clunky for my specific usecase of making digital eye-sandpaper bombastic garbage. Other options are bleak and gave me the mark of the beast via shitty watermarks. I just wanted a pile of gifs on top of each other, and thus far the "easiest" way was to bust open a video editor, muck around with it, mess up exporting as a gif directly, get mad, export it as a 4 second mp4, and then use ffmpeg to get it working. Is this probably moronic? yes. Am I likely to have missed a decent tool? yes. Did I give up looking after sending 4 dollars to some Indian guy for "No watermarks ever for 4$", only for that "ever" to be a year, and then the clunky weird af login process not working? absolutely. (Fuck you, you know who you are) This took me a few hours (most of which was dealing with the fact I don't do webshit normally and the clunk that one would expect from that), and is a minimal site for my personal minimal usecase. It's static because I'm not going to deal w/ hosting other people's shit and I don't want to deal with that can of worms. All processing is done locally in your browser. Yes, this means that using a 4k image as a base layer for your gif pile will make it take an age. It'll work eventually though. This will never have a watermark unless I'm bought out (total investment thus far has been 14 bucks, 4 of which was that one dude fucking me), in which case I probably earned it. 
At most I'll likely throw AdSense on there at some point to scrape a few cents from the people who can't figure out adblock, if it gets popular enough for me to warrant it. There are no timelines or anything like that. Literally just a pile of gifs. Thus far my primary usecase has been overlaying text gifs from the various fancy text generator sites onto glitter backgrounds with uncomfortable rat GIFs to call people poor on the internet. This makes me happy. There are likely to be obvious UI, UX, or other U-whatever fuckups. If you point them out and I deem it pedantic I'll probably laugh at you. If it's helpful I'll probably implement it when I get a bit. Surprisingly, it works on mobile. The CSS is exceedingly generic and soulless atm; I just went off vague memories of ss13's TGUI. I'll likely scrap the CSS entirely and go full neocities at some point because that's more soulful. https://gifpile.com/ May 13, 2026 at 01:11AM

Show HN: I submitted 316 AI-generated PRs to open source https://ift.tt/7iv5D3m

Show HN: I submitted 316 AI-generated PRs to open source https://june.kim/speedrunning-open-source May 12, 2026 at 10:12PM

Monday, May 11, 2026

Show HN: OpenGravity – A zero-install, BYOK vanilla JS clone of Antigravity https://ift.tt/KNgr4eM

Show HN: OpenGravity – A zero-install, BYOK vanilla JS clone of Antigravity https://ift.tt/txi5kvC May 12, 2026 at 12:23AM

Show HN: Mimik – open-source local-first alternative to Scribe and Tango https://ift.tt/ja0g5fI

Show HN: Mimik – open-source local-first alternative to Scribe and Tango https://ift.tt/O57npNg May 11, 2026 at 09:48PM

Sunday, May 10, 2026

Show HN: adamsreview – better multi-agent PR reviews for Claude Code https://ift.tt/FjQMgOJ

Show HN: adamsreview – better multi-agent PR reviews for Claude Code I built adamsreview, a Claude Code plugin that runs deeper, multi-stage PR reviews using parallel sub-agents, validation passes, persistent JSON state, and optional ensemble review via Codex CLI and PR bot comments. On my own PRs, it has been catching dramatically more real bugs than Claude’s built-in /review, /ultrareview, CodeRabbit, Greptile, and Codex’s built-in review, while producing fewer false positives. adamsreview is six Claude Code slash commands packaged as a plugin: review, codex-review, add, promote, walkthrough, and fix. I modeled it after the built-in /review command and extended it meaningfully. You can clear context between review stages because state is stored in JSON artifacts on disk, with built-in scripts for keeping it updated. The walkthrough command uses Claude’s AskUserQuestion feature to walk you through uncertain findings or items needing human review one by one. Then, the fix command dispatches per-fix-group agents and re-reviews the work with Opus, reverting any regressions before committing survivors. It runs against your regular Claude Code subscription (Max plan recommended), unlike /ultrareview, which charges against your Extra Usage pool. I would love feedback from Claude Code users, pro devs, and anyone with strong opinions about AI code reviews. Repo: https://ift.tt/3jsXF20 Install: /plugin marketplace add adamjgmiller/adamsreview, /plugin install adamsreview@adamsreview https://ift.tt/3jsXF20 May 11, 2026 at 06:06AM
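The "state stored in JSON artifacts on disk" idea can be sketched minimally. The file name, schema, and function names below are illustrative assumptions, not adamsreview's actual layout:

```python
import json
from pathlib import Path

# Hypothetical artifact location -- adamsreview's real schema may differ.
STATE_FILE = Path("review_state.json")

def save_findings(findings, stage):
    """Persist one stage's output so the next stage can start with a
    cleared LLM context and reload only what it needs from disk."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"stages": {}}
    state["stages"][stage] = findings
    STATE_FILE.write_text(json.dumps(state, indent=2))

def load_findings(stage):
    """Reload a prior stage's findings from the on-disk artifact."""
    state = json.loads(STATE_FILE.read_text())
    return state["stages"].get(stage, [])
```

Because the state survives on disk, each review stage can run in a fresh context without re-deriving earlier findings.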

Show HN: I trained a chess engine to play like humans https://ift.tt/n3xM6Ut

Show HN: I trained a chess engine to play like humans I built 1e4.ai - a chess web app where you play against neural networks trained to mimic human Lichess players at specific Elo ranges. There's a separate model for each 100-point rating bucket from ~800 to 2200+, and the bots not only choose human-like moves but also burn clock time, play worse under time pressure, and blunder in human-like ways. Live demo: https://1e4.ai Code: https://ift.tt/B4DrKcd A few things that might be interesting: - Trained on almost a full year of Lichess blitz games, around 1B total games - The architecture is a small (~9M parameter) transformer-based network that takes the board, recent move history, the player's rating, and remaining clock time as input. Three separate models per rating bucket: move, clock-usage, and win probability. The clock model is what makes the bots feel humanish under time pressure rather than instant. Because the move model takes the clock as one input parameter, it also learns to blunder under time pressure like a human might. - Because the network is so tiny, no GPU is needed for inference - it runs easily on a local CPU - The downside of the tiny network is that it's a bit weak as you turn the rating up past around 1700. It can spot short tactics but not long multi-move combinations. - Initial training on a rented 8xH100 cluster, then fine-tuning on my local GPU for different rating ranges - Inspired by Maia-2 and DeepMind's "Grandmaster-Level Chess Without Search". On a held-out Lichess blitz benchmark, it beats Maia-2 blitz on top-1 move prediction (56.7% vs 52.7%) and pretty substantially on win-probability calibration (Brier 0.176 vs 0.272). Numbers and code in https://ift.tt/8xM3S57... - The data pipeline is C++ via nanobind, then training with PyTorch. Getting this right was actually the thing I spent the most time on. 
Pre-shuffling the dataset and then being able to read the shuffled dataset sequentially at training time kept GPU utilization high. Without this, training spent a huge percentage of time on I/O while the GPU sat idle. Happy to answer questions about the rating-conditioning, the clock model, or the data pipeline. May 11, 2026 at 02:31AM
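The pre-shuffle trick is worth making concrete: pay the random-access cost once offline, then issue only sequential reads during training. A minimal sketch, with in-memory lists standing in for the on-disk shards a 1B-game dataset would actually need (names are mine, not the project's):

```python
import random

def preshuffle(records, seed=0):
    """One-time offline shuffle pass. At training time the shuffled copy
    is read front-to-back, so the data loader does sequential I/O only
    and the GPU is never stalled on seeks."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)  # deterministic for a fixed seed
    return shuffled

def training_batches(shuffled, batch_size):
    """Purely sequential reader over the pre-shuffled data."""
    for i in range(0, len(shuffled), batch_size):
        yield shuffled[i:i + batch_size]
```

The same idea applies shard-by-shard on disk: shuffle each shard (and the shard order) once, then stream them in order at train time.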

Show HN: Hustler Bingo – a tiny bingo game about startup Twitter clichés https://ift.tt/k20Tp1L

Show HN: Hustler Bingo – a tiny bingo game about startup Twitter clichés I built this after my brother started complaining that I'd gotten too deep into brainrot culture. It's just for fun, nothing serious, but I was able to test Vercel, TanStack Start and Convex without high stakes. Have fun! This is the game where a lower score is good for your mental health. https://ift.tt/Yn1hodf May 11, 2026 at 12:36AM

Show HN: Mosaic – arrange iOS icons by color using an evolutionary algorithm https://ift.tt/JYICAGj

Show HN: Mosaic – arrange iOS icons by color using an evolutionary algorithm It started out as a way for me to freshen up my C++ skills during COVID. But life got in the way and it was put on ice. Luckily, coding LLMs came to the rescue and allowed me to bring it to a point where I feel comfortable sharing it. https://ift.tt/HabRtX0 May 10, 2026 at 10:29PM

Saturday, May 9, 2026

Show HN: Free OSS transcription app I made and found it's faster than wispr flow https://ift.tt/2Yby17V

Show HN: Free OSS transcription app I made and found it's faster than wispr flow The title doesn't leave room for nuance: of course it's not the app itself that's faster, but the way you can use it, with Groq inference for example. https://mumbli.app/ May 10, 2026 at 01:37AM

Show HN: Create flashcards with Space CLI https://ift.tt/OCaJzUN

Show HN: Create flashcards with Space CLI Hey, seven years ago I created a flashcard app with a main focus on UX. In the last few months I added an offline-first mode and a CLI that allows Claude Code or Codex to create high-quality flashcards for you. I use it to learn about pharma rules, technology, dancing, taxes and my smart home. I never really did marketing; it's not my specialty. Would love to know what you think. https://ift.tt/2AaUWL0 May 9, 2026 at 06:38PM

Friday, May 8, 2026

Show HN: I mirrored war.gov's UAP archive in pure Rails with verifiable bytes https://ift.tt/I5DK8TV

Show HN: I mirrored war.gov's UAP archive in pure Rails with verifiable bytes https://ift.tt/bzkw43R May 9, 2026 at 03:16AM

Show HN: tltv – Federation protocol for 24/7 TV channels https://ift.tt/q85RIDn

Show HN: tltv – Federation protocol for 24/7 TV channels I spent six years trying to build a tv channel server. rewrote it eight times. flask, fastapi, ffmpeg, gstreamer, named pipes. every version got more complicated and none of them worked right. turns out I was building the wrong thing. the thing I actually wanted was a protocol. so tltv is that. a channel is an ed25519 key pair. you sign your metadata with it. you serve hls video from wherever you want. your public key becomes a tltv:// address that anyone can tune into. relay nodes can re-serve your stream but they can't modify it. they verify signatures on everything. you can move servers and keep your channel because the key is the identity, not the hostname. nodes find each other through peer exchange. no central registry. the cli is probably the fastest way to see what I mean:

curl -fsSL timelooptv.org/install | sh
tltv keygen
tltv server test --name "my channel" -k TV*.key

that's a fully compliant origin server. pure go, generates smpte bars with audio, no ffmpeg. one binary, ~20mb of ram. there's also a full gstreamer-based server (cathode), a web viewer (phosphor), and bridge/relay servers in the cli. everything mit licensed. live demo at https://ift.tt/zw0U8Bo https://ift.tt/pYUnV2X https://timelooptv.org/ May 8, 2026 at 11:28PM
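The "public key is the identity" addressing idea can be sketched. The encoding below (base32 over the raw 32 key bytes plus a 2-byte hash checksum) is a hypothetical scheme for illustration only; tltv's actual address format may differ:

```python
import base64
import hashlib

def tltv_address(pubkey: bytes) -> str:
    """Derive a pasteable address from a raw ed25519 public key.

    Hypothetical encoding: base32 of key bytes + sha256 checksum prefix.
    Because the address is the key itself, the channel survives a move
    to a new server: the hostname was never part of the identity.
    """
    assert len(pubkey) == 32  # ed25519 public keys are always 32 bytes
    check = hashlib.sha256(pubkey).digest()[:2]  # assumed checksum, catches typos
    body = base64.b32encode(pubkey + check).decode().rstrip("=").lower()
    return f"tltv://{body}"
```

Relays can then verify that metadata signatures check out against the key embedded in the address, which is why they can re-serve but not modify a stream.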

Show HN: The independent guide to agent orchestrators https://ift.tt/PimA21d

Show HN: The independent guide to agent orchestrators Hey HN! I built AgentMGMT.dev today to keep track of all those agent orchestration tools that keep popping up. I've tried a few and landed on Superset, which I'm extremely happy (and productive!) with - but I think this category of tools will be extremely important and interesting in the next couple years, so it's worth keeping an eye on all available tools and how they evolve. I will keep the site up-to-date, please help me by submitting new tools that are not yet in the list, or add any details that might help folks who are out shopping for their first/next agent orchestrator! https://agentmgmt.dev/ May 9, 2026 at 01:17AM

Show HN: GETadb.com – every GET request creates a DB https://ift.tt/Z7wST0t

Show HN: GETadb.com – every GET request creates a DB Hey HN! We made GETadb.com, so it's easier to get agents to build you full stack apps. You don't need to give them any credentials. Just by loading a GET request, they get access to a database, a sync engine, and abstractions for auth, presence, and streams. To see what the agent sees, you can load https://getadb.com/new There are two fun things about how it's implemented: 1. If you curl the home page, it serves the agent content rather than human content. We do this by detecting the 'Sec-Fetch-Mode' header. It's not perfect, but gets the job done for Claude Code et al. 2. For an agent to spin up an app, they make _two_ fetches. (1) getadb.com/guide tells them to generate a uuid, and fetch (2) getadb.com/provision/. We did this because just about half of the popular web-based app builders cache URLs globally, even if you return no-store headers. To get around this we just instruct the agent to generate unique URLs. You may wonder: Why GET requests, rather than POST requests? It's because then you can build in surprising places. For example, we get meta.ai to build an app inside the artifact preview: https://ift.tt/LrKtfv8 Under the hood, this is possible because the whole infra is multi-tenant from the ground up. We already announced how that works on HN, but if you're curious here's the essay for it: https://ift.tt/8Cta9jD https://www.getadb.com/ May 8, 2026 at 08:17PM
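The Sec-Fetch-Mode detection in point 1 amounts to a one-line check: browsers navigating to a page send `Sec-Fetch-Mode: navigate`, while curl and most agent HTTP clients send no such header. A sketch (the heuristic and function name are mine, not GETadb's actual code):

```python
def is_agent_request(headers: dict) -> bool:
    """Imperfect but cheap heuristic: treat anything that is not a
    browser top-level navigation as an agent/CLI fetch.

    Browsers set Sec-Fetch-Mode on their requests; curl, Claude Code,
    and most HTTP libraries do not send the header at all.
    """
    mode = headers.get("Sec-Fetch-Mode")
    return mode is None or mode != "navigate"
```

A server would branch on this to return machine-oriented instructions to agents and a normal landing page to humans.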

Thursday, May 7, 2026

Show HN: Kstack – Skill pack for monitoring/troubleshooting K8s in Claude Code https://ift.tt/GiXwcPb

Show HN: Kstack – Skill pack for monitoring/troubleshooting K8s in Claude Code Hi All, Recently I've been using Claude Code a lot for debugging cluster issues and I realized I was performing similar tasks repeatedly so I decided to package them up into skills so I could call them up more easily (e.g. `/investigate`, `/audit-security`, `/audit-outdated`). I'm calling the skill pack "kstack" and the goal is to be able to monitor and troubleshoot K8s from within Claude Code. If you have time, I'd love to get some feedback on the project! Andres Source: https://ift.tt/ht62L7P Docs: https://kstack.sh/ https://ift.tt/ht62L7P May 7, 2026 at 09:24AM

Show HN: Full Python GUI apps in the browser – no JavaScript, no server https://ift.tt/lmkBEF0

Show HN: Full Python GUI apps in the browser – no JavaScript, no server I have been working on Dear ImGui Bundle since 2022, but this is the first time I've talked about it here. It is a framework around Dear ImGui for building interactive applications in Python and C++. It comes with batteries included: plotting, image inspection, Markdown, node editors, 3D gizmos, knobs, toggles, etc. https://imgui-bundle.pages.dev It now also runs smoothly in the browser via Pyodide: the playground below is a Python app running in your browser (no server, no JavaScript). You can edit the code on the left and click Run. It even works on mobile. https://imgui-bundle.pages.dev/playground I have a strong interest in providing tools that help others express their creativity. This project aims to be a step in that direction, as it helps develop GUIs where the code is extremely readable and hackable. Some of the goals it addresses: - Bring true immediate-mode GUI to Python and C++ - A versatile range of high-quality libraries: widgets, plots, image analysis, node editing, Markdown rendering - Multiplatform apps in C++: works on all platforms (desktop, mobile, Emscripten) - Deploy Python apps to the web - High-quality Python bindings that are always up to date (because they are auto-generated) - Smooth transition between C++ and Python (same APIs for both) I'd be happy to answer questions! https://ift.tt/rdm4hVJ May 7, 2026 at 09:36PM
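For readers unfamiliar with immediate-mode GUIs: instead of building a retained widget tree, the app re-declares its whole UI every frame, and a call like button() both draws the widget and reports interaction. A toy, self-contained illustration of that pattern (this is NOT the imgui_bundle API, just the idea):

```python
class ImmediateUI:
    """Toy immediate-mode core: no retained widget objects. The app
    redeclares the UI each frame; button() draws AND reports clicks."""

    def __init__(self):
        self.clicked = set()   # widget labels clicked since last frame
        self.frame_log = []    # what was "drawn" this frame

    def new_frame(self):
        self.frame_log = []

    def button(self, label: str) -> bool:
        self.frame_log.append(("button", label))
        if label in self.clicked:   # consume the pending click
            self.clicked.discard(label)
            return True
        return False

# Application state lives in plain variables, not in widgets.
ui = ImmediateUI()
count = 0
for frame in range(3):
    if frame == 1:
        ui.clicked.add("increment")  # simulate a click arriving between frames
    ui.new_frame()
    if ui.button("increment"):       # drawn and queried in one call
        count += 1
```

This is why immediate-mode code tends to be so readable and hackable: the UI is just ordinary control flow over your own state.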

Show HN: Bilig – a headless spreadsheet engine for Node services and agents https://ift.tt/d02ALIk

Show HN: Bilig – a headless spreadsheet engine for Node services and agents https://ift.tt/Cug5LO4 May 7, 2026 at 10:16PM

Wednesday, May 6, 2026

Show HN: PHP-fts – Full-text search engine in pure PHP, no extensions https://ift.tt/X39hy1F

Show HN: PHP-fts – Full-text search engine in pure PHP, no extensions https://ift.tt/WRiKSY6 May 7, 2026 at 12:28AM

Show HN: Mac Juice Monitor – Bluetooth battery levels in the macOS menu bar https://ift.tt/HX1Wcbm

Show HN: Mac Juice Monitor – Bluetooth battery levels in the macOS menu bar https://ift.tt/WY2Z6LG May 6, 2026 at 11:28PM

Show HN: Red Squares – GitHub outages as contributions https://ift.tt/qI0bWkS

Show HN: Red Squares – GitHub outages as contributions https://red-squares.cian.lol/ May 6, 2026 at 02:28PM

Show HN: Rdprrap – Rust Port of RDPWrap (Multi-Session RDP for Windows Desktop) https://ift.tt/m24CHXe

Show HN: Rdprrap – Rust Port of RDPWrap (Multi-Session RDP for Windows Desktop) https://ift.tt/uIOWGRj May 6, 2026 at 11:30AM

Tuesday, May 5, 2026

Show HN: Better Design – 28 Shadcn design systems (OSS, MCP: Cursor/Claude Code) https://ift.tt/8WqXyNJ

Show HN: Better Design – 28 Shadcn design systems (OSS, MCP: Cursor/Claude Code) https://ift.tt/nufz7Zw May 6, 2026 at 03:31AM

Show HN: New Benchmark from SWE-bench team is 0% solved https://ift.tt/v8jS2GI

Show HN: New Benchmark from SWE-bench team is 0% solved https://ift.tt/prCFfYk May 5, 2026 at 07:10PM

Monday, May 4, 2026

Sunday, May 3, 2026

Show HN: VidMark – Frame.io-style timestamped comments for Google Drive https://ift.tt/QSn5Ym6

Show HN: VidMark – Frame.io-style timestamped comments for Google Drive https://ift.tt/AXIk5dv May 4, 2026 at 12:59AM

Show HN: Interpretable AutoResearch – Legible Agent Workflows https://ift.tt/lzwPCuS

Show HN: Interpretable AutoResearch – Legible Agent Workflows https://ift.tt/uOT0qj8 May 4, 2026 at 12:15AM

Show HN: Tyche: An experimental distributed trading pipeline in Go and Java https://ift.tt/P3agB1v

Show HN: Tyche: An experimental distributed trading pipeline in Go and Java https://ift.tt/CtkBRgv May 3, 2026 at 11:43PM

Saturday, May 2, 2026

Show HN: Golang binaries built for your users depending on their arch and system https://ift.tt/R2hbgNm

Show HN: Golang binaries built for your users depending on their arch and system https://goblin.run April 30, 2026 at 06:13PM

Show HN: Use an Android Phone as an HTTP Proxy https://ift.tt/OFzwnha

Show HN: Use an Android Phone as an HTTP Proxy I created a simple project to allow you to use a phone as a web proxy. This is not a proxy for the phone; it's a way to proxy web traffic from elsewhere via the phone. One practical use case is accessing geo-restricted content. If you have a trusted contact in the country with an Android phone, this can serve as a simple alternative to a commercial VPN. To set it up you need to run a proxy server, which can run as a Docker container. You then need to install the app on the Android phone, which will connect to the server. Finally you configure a browser to use the proxy server as the HTTP/HTTPS proxy. More details here: https://ift.tt/SvMYelr Let me know how you go and if you run into any issues. https://ift.tt/WRTdYU2 May 3, 2026 at 04:14AM

Show HN: State of the Art of Coding Models, According to Hacker News Commenters https://ift.tt/4AsPfdm

Show HN: State of the Art of Coding Models, According to Hacker News Commenters Hello HN, I was away from my computer for two weeks, and after coming back and reading the latest discussions on HN about coding assistants (models, harnesses), I felt very out of the loop. My normal process would have been to keep reading and figure out the latest and greatest from people's comments, but I wanted to try and automate this process. Basically the goal is to get a quick overview over which coding models are popular on HN. A next iteration could also scan for harnesses that people use, or info on self-hosting or hardware setups. I wrote a short intro on the page about the pipeline that collects and analyzes the data, but feel free to ask for more details or check the Google Sheet for more info. https://hnup.date/hn-sota https://hnup.date/hn-sota May 3, 2026 at 01:25AM
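The "which coding models are popular on HN" aggregation step can be sketched as a simple mention counter over comment text. The watchlist and function below are illustrative assumptions; the real pipeline presumably does something more robust than keyword matching:

```python
import re
from collections import Counter

# Hypothetical watchlist -- a real pipeline would extract names
# dynamically rather than hard-code them.
MODELS = ["claude", "gpt", "gemini", "qwen", "deepseek"]

def mention_counts(comments):
    """Count how many comments mention each model at least once."""
    counts = Counter()
    for text in comments:
        low = text.lower()
        for model in MODELS:
            if re.search(rf"\b{model}", low):  # prefix match: 'gpt' hits 'gpt-5' too
                counts[model] += 1
    return counts
```

Ranking the resulting counts over a time window gives a crude but serviceable "state of the art according to commenters" signal.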

Show HN: Clipmon is a macOS clipboard manager on steroids https://ift.tt/ig2S7Xo

Show HN: Clipmon is a macOS clipboard manager on steroids https://ift.tt/p6wGWyC May 3, 2026 at 12:29AM

Friday, May 1, 2026

Show HN: Turn Docker Compose files into airgap-ready UDS Packages https://ift.tt/n3dykAt

Show HN: Turn Docker Compose files into airgap-ready UDS Packages https://ift.tt/yu0T6h8 May 2, 2026 at 01:25AM

Show HN: Destiny – Claude Code's Fortune Teller skill https://ift.tt/AmLTeuh

Show HN: Destiny – Claude Code's Fortune Teller skill Destiny is a Claude Code plugin that gives you a real fortune reading. Type /destiny to see today's destiny! It uses an actual classical East Asian astrology system. You enter your birthday once, then /destiny gives you today's reading anytime. Two layers, kept honest: 1. The numbers (your eight-character birth chart, today's day pillar, the hexagram for the moment, five-element relationships) are computed by a Python script. Same person + same day = identical output. You can verify against any traditional calendar source. 2. The prose (today's stars, character sketch, life arc, advice) is written by Claude, applying centuries-old reading conventions to that fixed data. Not an LLM-hallucinated horoscope. If you have fun with it, a star would mean a lot. https://ift.tt/C8Qu0HS May 1, 2026 at 11:56PM
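The deterministic "numbers" layer rests on the sexagenary (stem-branch) cycle: 10 heavenly stems and 12 earthly branches advance together each day, repeating every lcm(10, 12) = 60 days. A sketch of a day-pillar calculation; the epoch offset that aligns the cycle to the real calendar is deliberately left as a placeholder rather than guessed:

```python
STEMS = ["Jia", "Yi", "Bing", "Ding", "Wu", "Ji", "Geng", "Xin", "Ren", "Gui"]
BRANCHES = ["Zi", "Chou", "Yin", "Mao", "Chen", "Si", "Wu", "Wei",
            "Shen", "You", "Xu", "Hai"]

def day_pillar(days_since_epoch: int, epoch_offset: int = 0) -> str:
    """Deterministic stem-branch pair for a day: same day, same output.

    Stems and branches advance together, so the pair repeats every
    60 days. epoch_offset pins the cycle to a real calendar date; the
    correct value depends on the reference epoch you choose, so 0 here
    is a placeholder, not a claim about the traditional calendar.
    """
    n = (days_since_epoch + epoch_offset) % 60
    return f"{STEMS[n % 10]}-{BRANCHES[n % 12]}"
```

This is the sense in which the chart is verifiable: the mapping is pure arithmetic, so anyone with the same date gets the identical pillar.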

Show HN: GhostBox – Borrow a disposable little machine from the Global Free Tier https://ift.tt/Zb3dL4U

Show HN: GhostBox – Borrow a disposable little machine from the Global Free Tier I built this because I was always creating machines on GH Actions to test builds on different OSes, and I wanted a tight CLI that could do it. I always saw Actions as this great resource, and ephemeral machines you could do dev work in just felt like a natural way for me to work, so this grew out of that workflow. I didn't expect it to blow up, so it wasn't 100% finished when I posted it. But it should stabilize pretty quickly. Happy to hear what you think and talk about it. https://ift.tt/fBYAbFc May 1, 2026 at 06:52PM

Thursday, April 30, 2026

Show HN: Pu.sh – a full coding-agent harness in 400 lines of shell https://ift.tt/Y81VNdT

Show HN: Pu.sh – a full coding-agent harness in 400 lines of shell I originally was just messing with pi-autoresearch. Gave it a sample task to build the most portable coding agent. First cut was 6 KB of shell. Great for one-shots, unusable interactively. I was shocked it actually worked. Started building up -- adding features -- but with a self-imposed rule: no new dependencies, and sub 500 LOC. This thing had to be truly portable. Just sh, curl, awk. System primitives only. Which means I did some genuinely disgusting things in awk, including JSON parsing and the OpenAI Responses tool loop with reasoning items carried across turns. It's now ~400 lines. In the box: Anthropic + OpenAI, 7 tools (bash, read, write, edit, grep, find, ls), REPL, auto-compaction, checkpoint/resume, pipe mode, 90 no-API tests. Not in the box: TUI, streaming, images, OAuth, Windows, dignity. Two honest things: 1. I stole/modified the system prompt and the architecture. Pi/Claude/Codex wrote the awk. I cannot read most of this code. This wasn't possible for me a year ago. 2. Heavily inspired by Pi (pi.dev) -- same 7-tool surface, same exact-text edit model. Credit where it's due. Pi is awesome -- you should probably use them. The agent loop itself is tiny. Almost everything else in a "real" agent CLI is DX and hardening. You can probably build your own harness exactly how you like it. Mario Zechner's AI Engineer talk on taking back control of your tools nudged me here. The name is because it's a .sh file. The other thing it sounds like is, regrettably, also accurate. https://pu.dev/ May 1, 2026 at 12:55AM
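The "agent loop itself is tiny" claim is easy to believe once written down. A Python sketch of the core model-and-tool loop (pu.sh implements this in sh + awk; model_call below stands in for an actual Anthropic/OpenAI request, and the message shape is simplified for illustration):

```python
def agent_loop(model_call, tools, user_task, max_turns=10):
    """Minimal harness core: call the model, run any tool it asks for,
    feed the result back, repeat until the model stops requesting tools.

    model_call(messages) -> {"text": str, "tool": str | None, "args": dict}
    tools: mapping of tool name -> callable
    """
    messages = [{"role": "user", "content": user_task}]
    for _ in range(max_turns):
        reply = model_call(messages)
        messages.append({"role": "assistant", "content": reply["text"]})
        if reply.get("tool") is None:          # no tool requested: done
            return reply["text"]
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "max turns exceeded"
```

Everything else a "real" CLI adds (streaming, compaction, checkpoints, a TUI) wraps around this loop without changing its shape.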

Show HN: Free no-signup site auditor – secrets, subdomain takeover, CVEs https://ift.tt/FmaR0X1

Show HN: Free no-signup site auditor – secrets, subdomain takeover, CVEs https://ift.tt/Ii4Dec9 May 1, 2026 at 12:04AM

Show HN: Exploding Hamsters https://ift.tt/0skLFTi

Show HN: Exploding Hamsters https://ift.tt/bjSEBqc April 30, 2026 at 10:50PM

Wednesday, April 29, 2026

Show HN: A Multi User Multi Task Board MCP Server https://ift.tt/SQYAZma

Show HN: A Multi User Multi Task Board MCP Server I built a simple multi-user, multi-board Task/Kanban MCP server. I had been looking for something like this to manage development agents, but I wasn't seeing anything that felt like what I wanted. So I sat down and decided to vibe code an alternative. While it was an experiment at first, I have been using it daily for my personal development projects and I really think there are others who might be looking for exactly this. It's 100% a WIP, but it is also very usable. I have a demo instance running at https://mootasks.dev . If you find this interesting I'd appreciate a star. This is really the first thing I've built that I felt would be of interest to others. The readme explains it, but if you have Docker you can get this running in a couple minutes. It's helped my workflow a lot and I plan on continuing to add features and improve it. https://ift.tt/1B5fU4v April 29, 2026 at 11:41PM

Show HN: Generative UI Library for React https://ift.tt/4PHUpJG

Show HN: Generative UI Library for React https://ift.tt/jEQ5tod April 29, 2026 at 11:28PM

Tuesday, April 28, 2026

Show HN: A TUI for Markdown viewing and editing https://ift.tt/2sXLTHu

Show HN: A TUI for Markdown viewing and editing Hi HN, I built a simple TUI for viewing and editing .md files in the terminal. More and more markdown files keep appearing in our projects, and I found myself needing a quick way to view (with syntax highlighting) and edit them without leaving the terminal, so I built this. https://mdee.bkh.dev April 28, 2026 at 11:56PM

Show HN: Drive any macOS app in the background without stealing the cursor https://ift.tt/H8hPNme

Show HN: Drive any macOS app in the background without stealing the cursor Hi HN, Francesco from Cua here. I hacked this project together last weekend, inspired by the Codex Computer-Use release and lessons learned from deploying GUI-operating agents for our customers. The main problem: when a UI automation process controls a desktop app today, it usually takes over the human’s session. Your cursor moves, keyboard focus gets stolen, windows jump to the front, and you have to stop working until the agent is done. That is why we have historically avoided encouraging users to run these processes directly on their host machine, instead relying on VMs or GUI containers for concurrency and background execution. But computer-use - the tools we give agents to operate computers like humans - does not scale cleanly that way. As models get smarter, agents need to share hosts safely, run in the background, and avoid collisions with the human or other agents using the same machine. We realized macOS has no first-class API for "drive this app without touching the cursor". CGEventPost routes through the hardware input stream, so it moves your cursor. CGEvent.postToPid avoids the cursor warp, but Chromium treats those events as untrusted and silently drops clicks at the renderer boundary. Activating the target app first raises the window and pulls focus, defeating the point of background execution. Cua Driver is our attempt at a real fix: a background computer-use driver for macOS that lets an agent click, type, scroll, and read native apps while your cursor, frontmost app, and Space stay where they are. The default interface is a CLI, so it is easy to script or call from any coding agent shell. Try it on macOS 14+: /bin/bash -c "$(curl -fsSL https://ift.tt/ArHFJvo... )" The first internal use case was delegated demo recording. We ask Claude Code to drive an app while 'cua-driver recording start' captures the trajectory, screenshots, actions, and click markers. 
The result is an agent-generated product demo, Screen Studio-inspired. Other things we have used it for: - Replacing Vercel’s agent-browser and other browser-use CLIs. With Claude Code and Cua Driver, you do not need Chrome DevTools Protocol at all. - A dev-loop QA agent that reproduces a visual bug, edits code, rebuilds, and verifies the UI while my editor stays frontmost. - Personal-assistant flows that use iMessage from Claude Code, Hermes, or other general-purpose agent CLIs. - Pulling visual context from Chrome, Figma, Preview, or YouTube windows I am not looking at, without relying on their APIs. What made this harder than expected: - CGEventPost warps the cursor because it goes through the HID stream. - CGEvent.postToPid does not warp the cursor, but Chromium drops it at the renderer IPC boundary. - Activating the target first raises the window and can drag you across Spaces. - Electron apps stop keeping useful AX trees alive when windows are occluded without a private remote-aware SPI. The unlock was SkyLight. SLEventPostToPid is a sibling of the public per-PID call, but it travels through a WindowServer channel Chromium accepts as trusted. Pair it with yabai’s focus-without-raise pattern, plus an off-screen primer click at (-1, -1), and the click lands without the window ever raising. One thing we learned: the right addressing mode depends on the app. Native macOS apps usually have rich AX trees, Chromium-family apps often need a hybrid of AX and screenshots, and apps like Blender or CAD tools may expose almost no useful AX surface. The mistake is defaulting to pixels everywhere - or defaulting to AX everywhere. Long technical writeup: https://ift.tt/r3gZOMA... I would like feedback from people building Mac automation, agent harnesses, or accessibility tooling. If it breaks on a macOS app you care about, that is useful data for us. https://ift.tt/qGvIKUE April 28, 2026 at 08:03PM

Monday, April 27, 2026

Show HN: Waiting for LLMs Sucks – Give your user a game https://ift.tt/LnaipCd

Show HN: Waiting for LLMs Sucks – Give your user a game Give your user a game while they wait for the LLM to return a result. https://ift.tt/XqPaWSB April 28, 2026 at 06:45AM

Show HN: AgentSwift – Open-source iOS builder agent https://ift.tt/7bZER6H

Show HN: AgentSwift – Open-source iOS builder agent I'm working on a coding agent for building iOS apps. It's built on openspec and xcodebuildmcp. It's free and open source. https://ift.tt/CHy1kth April 28, 2026 at 05:14AM

Show HN: 49Agents – Infinite canvas IDE for AI agents https://ift.tt/M6gPBTU

Show HN: 49Agents – Infinite canvas IDE for AI agents https://ift.tt/R9NfLEJ April 28, 2026 at 04:36AM

Sunday, April 26, 2026

Saturday, April 25, 2026

Show HN: Odozi – open-source iOS journaling app https://ift.tt/XGjbig8

Show HN: Odozi – open-source iOS journaling app Yeah, I know, I hate the name too, but I wasn't about to pay up for odyssey.app. It's an open source project so feel free to poke around with it / fork it. I talk about it more on the marketing website, but a few of us have been using it for the past month and it's kind of fun. Obviously there will be a slew of issues / feedback / nits that come from this, but c'est la vie. GH is here: https://ift.tt/yl7JRt5 https://odozi.app April 25, 2026 at 07:52PM

Friday, April 24, 2026

Show HN: I'm 15 and built a cryptographic accountability layer for AI agents https://ift.tt/VSo86tr

Show HN: I'm 15 and built a cryptographic accountability layer for AI agents i'm 15 and a sophomore in high school in california. for the past two weeks i've been building a protocol that lets you prove what an AI agent actually did. not just log it. prove it. signed receipts before and after each action, hash chained, verifiable by anyone. this week microsoft merged my code into their agent governance toolkit. twice. happy to answer questions about how it works. https://ift.tt/Du7cfB4 April 24, 2026 at 10:56PM
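The "signed receipts before and after each action, hash chained" idea can be sketched in a few lines. This is a minimal illustration, not the actual protocol: it uses HMAC-SHA256 as a stand-in for whatever real signature scheme the project uses, and the field names are my own.

```python
import hashlib
import hmac
import json

SECRET = b"agent-signing-key"  # stand-in for a real signing keypair


def sign(payload: dict, prev_hash: str) -> dict:
    """Create a receipt whose hash covers the previous receipt's hash."""
    body = json.dumps(payload, sort_keys=True)
    this_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    sig = hmac.new(SECRET, this_hash.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "prev": prev_hash, "hash": this_hash, "sig": sig}


def verify(chain: list[dict]) -> bool:
    """Walk the chain, recomputing every hash and checking every signature."""
    prev = "genesis"
    for r in chain:
        body = json.dumps(r["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if r["hash"] != expected or r["prev"] != prev:
            return False
        good_sig = hmac.new(SECRET, expected.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(r["sig"], good_sig):
            return False
        prev = r["hash"]
    return True
```

Tampering with any earlier payload changes its recomputed hash, which breaks every link after it, so the whole chain fails verification.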

Show HN: #1 On This Day https://ift.tt/VP5zOyb

Show HN: #1 On This Day https://onthisday-theta.vercel.app April 24, 2026 at 08:12PM

Thursday, April 23, 2026

Show HN: Agent Vault – Open-source credential proxy and vault for agents https://ift.tt/2pN0JCV

Show HN: Agent Vault – Open-source credential proxy and vault for agents Blog post: https://ift.tt/4W29sw3... https://ift.tt/jc9Vh0Z April 22, 2026 at 08:25PM

Show HN: Tron Hilbert Curve Macro https://ift.tt/4t1JiLE

Show HN: Tron Hilbert Curve Macro is it useful? probably not! https://ift.tt/PTjD3YB April 24, 2026 at 12:24AM

Show HN: AgentSearch – Self-hosted search and MCP for AI agents, no API keys https://ift.tt/U8q7Qxu

Show HN: AgentSearch – Self-hosted search and MCP for AI agents, no API keys https://ift.tt/IjF4QXA April 23, 2026 at 10:25PM

Wednesday, April 22, 2026

Show HN: Ghost Pepper Meet local meeting transcription and diarization https://ift.tt/TcEMvKs

Show HN: Ghost Pepper Meet local meeting transcription and diarization 100% local & private transcription engine for macOS. It captures meeting audio and does speaker diarization. I was originally building it as its own app, but it can leverage the same local models as my original push-to-talk voice transcription product, so I combined them into one app. https://matthartman.github.io/ghost-pepper/ April 22, 2026 at 11:19PM

Show HN: One ESLint rule to kill the "ChatGPT em dash" in your codebase https://ift.tt/3YVZ9AB

Show HN: One ESLint rule to kill the "ChatGPT em dash" in your codebase https://ift.tt/RvswBVd April 22, 2026 at 11:57PM

Tuesday, April 21, 2026

Show HN: Agent Brain Trust, customisable expert panels for AI agents https://ift.tt/ZksgPn4

Show HN: Agent Brain Trust, customisable expert panels for AI agents Agent Brain Trust lets you summon a panel of real, named experts to critique your architecture, review your writing, pressure-test your product strategy, or debate your design patterns. 10 built-in trusts, an extensible roster, and a working turn-taking protocol that ensures nothing useful gets skipped. Guest experts are drafted via an MCP server that maps topics to real persona cards so the panel can reach into niche and novel territory without inventing expertise it does not have. Wrote up the full thinking here: https://tinyurl.com/agent-brain-trust https://ift.tt/32yEk5I April 22, 2026 at 03:03AM

Show HN: Linux installer .exe without pendrives (secure-boot compatible) https://ift.tt/SLsBOF4

Show HN: Linux installer .exe without pendrives (secure-boot compatible) It is fairly simple, but useful for non-power users: a statically-linked Qt app that gracefully edits Windows Boot Manager to do a one-time reboot into GRUB, which in turn launches ZorinOS live/installer from disk loopback. I needed to make a custom initramfs with dislocker in it, and to squeeze it a little (the default Windows EFI partition is not so big). Currently my biggest issue is that systemd is unable to shut down gracefully when rootfs is mounted from such a deep loop stack. The other is that installing to the full disk crashes right after it successfully damages the partition table. I think I could solve both by copying the ISO contents to a ramdisk before systemd takes over (the initramfs needs to stay slim, as I mentioned before) - maybe opportunistically, if there is enough RAM (3.5 GB is not so unreasonable to expect). I decided to make it paid for now (maybe not eligible for Show HN? if so, sorry, I've been mostly a HN reader so far), but I am still considering whether the project would do more good free of charge. It is free as in GPL3+ though, so although I can politely ask you not to exercise it too hard until I turn Bad™, you (will) have several ways to obtain it without paying anyway. The website itself (especially payments) is also an interesting story, I can share that some day too. What do you think? https://ift.tt/sJqGgQU April 22, 2026 at 03:03AM

Show HN: Almanac MCP, turn Claude Code into a Deep Research agent https://ift.tt/h5qkzni

Show HN: Almanac MCP, turn Claude Code into a Deep Research agent I am Rohan, and I have grown really frustrated with CC's search and read tools. They use Haiku to summarise all the search results, so it is really slow and often ends up being very lossy. I built this MCP that you can install into your coding agents so they can actually access the web properly. Right now it can: - search the general web - search Reddit - read and scrape basically any webpage Install it: npx openalmanac setup The MCP is completely free to use. We have also built a central store where you can contribute things you learned while exploring. If you find something useful, you can contribute it to the encyclopedia we're building at Almanac using the same MCP. https://ift.tt/vDlFMAp April 22, 2026 at 02:12AM

Show HN: Backlit Keyboard API for Python https://ift.tt/NghxFS9

Show HN: Backlit Keyboard API for Python It currently supports Linux only. You can use this package to tinker with many things: say you want a custom notification system, you can make the keyboard blink when your website goes down. macOS support is underway. I haven't tested Windows yet, since I don't use it anymore. In the future, if this package sees nice growth, I'll be happy to make a similar Rust crate for it. https://ift.tt/sH8MdzN April 19, 2026 at 10:52AM

Monday, April 20, 2026

Show HN: I Built SwiftUI but for macOS MDM https://ift.tt/DrviOhA

Show HN: I Built SwiftUI but for macOS MDM https://ift.tt/4k1MPRx April 21, 2026 at 12:57AM

Show HN: Simple CLI tool to convert PDFs to dark mode, with TOC preservation https://ift.tt/JMPyVOS

Show HN: Simple CLI tool to convert PDFs to dark mode, with TOC preservation Hi HN, I made a little something that could be useful to those like me that read pdfs at night. https://ift.tt/0JCZ7gd April 21, 2026 at 12:22AM

Show HN: Git Push No-Mistakes https://ift.tt/w4UPLDR

Show HN: Git Push No-Mistakes no-mistakes is how I kill AI slop. It puts a local git proxy in front of my real remote. I push to no-mistakes instead of origin, and it spins up a disposable worktree, runs my coding agent as a validation pipeline, forwards upstream only after every check passes, opens a clean PR automatically, and babysits CI pipeline for me. https://ift.tt/OPdxvbj April 20, 2026 at 10:40PM

Sunday, April 19, 2026

Show HN: How context engineering works, a runnable reference https://ift.tt/rEBgj3Q

Show HN: How context engineering works, a runnable reference I've been presenting at local meetups about Context Engineering, RAG, Skills, etc. I even have a vBrownBag coming up on LinkedIn about this topic, so I figured I would make a basic example that uses Bedrock so I can use it in my talks or vBrownBags. Hopefully it's useful. https://ift.tt/iKnxm2z April 17, 2026 at 10:20PM

Show HN: Newsmaps.io a map of how news topics are covered by different countries https://ift.tt/HW3FCY1

Show HN: Newsmaps.io a map of how news topics are covered by different countries https://ift.tt/dnpCa6U April 20, 2026 at 01:02AM

Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/V5spmgC

Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/Jj5uQKN April 19, 2026 at 08:59PM

Show HN: Free PDF redactor that runs client-side https://ift.tt/RGUM9d6

Show HN: Free PDF redactor that runs client-side I recently needed to verify past employment and to do so I was going to upload paystubs from a previous employer, however I didn't want to share my salary in that role. I did a quick search online and most sites required sign-up or weren't clear about document privacy. I conceded and signed up for a free trial of Adobe Acrobat so I could use their PDF redaction feature. I figured there should be a dead simple way of doing this that's private, so I decided to create it myself. What this does is rasterize each page to an image with your redactions burned in, then it rebuilds the PDF so the text layer is permanently destroyed and not just covered up and easily retrievable. I welcome any and all feedback as this is my first live tool, thanks! https://redactpdf.net April 19, 2026 at 10:39PM

Saturday, April 18, 2026

Show HN: AI Subroutines – Run automation scripts inside your browser tab https://ift.tt/U4DT1SK

Show HN: AI Subroutines – Run automation scripts inside your browser tab We built AI Subroutines in rtrvr.ai. Record a browser task once, save it as a callable tool, replay it at: zero token cost, zero LLM inference delay, and zero mistakes. The subroutine itself is a deterministic script composed of discovered network calls hitting the site's backend as well as page interactions like click/type/find. The key architectural decision: the script executes inside the webpage itself, not through a proxy, not in a headless worker, not out of process. The script dispatches requests from the tab's execution context, so auth, CSRF, TLS session, and signed headers get added to all requests and propagate for free. No certificate installation, no TLS fingerprint modification, no separate auth stack to maintain. During recording, the extension intercepts network requests (MAIN-world fetch/XHR patch + webRequest fallback). We score and trim ~300 requests down to ~5 based on method, timing relative to DOM events, and origin. Volatile GraphQL operation IDs are detected and force a DOM-only fallback before they break silently on the next run. The generated code combines network calls with DOM actions (click, type, find) in the same function via an rtrvr.* helper namespace. Point the agent at a spreadsheet of 500 rows and with just one LLM call parameters are assigned and 500 Subroutines kicked off. 
Key use cases: - record sending an IG DM, then have a reusable, callable routine to send DMs at zero token cost - create a routine that gets the latest products in a site catalog, then call it to get thousands of products via direct GraphQL queries - set up a routine to file an EHR form based on parameters to the tool; the AI infers parameters from the current page context and calls the tool - reuse a routine daily to sync outbound messages on LinkedIn/Slack/Gmail to a CRM using an MCP server We think the fundamental reason browser agents haven't taken off is that, for repetitive tasks, going through the inference loop is unnecessary. Better to just record once, and get the LLM to generate a script leveraging all the possible ways to interact with a site and the wider web: directly calling backend APIs, interacting with the DOM, and calling 3P tools/APIs/MCP servers. https://ift.tt/50OqDWg April 18, 2026 at 01:03AM
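The "score and trim ~300 requests down to ~5" step above is the interesting part of recording. A toy version of such a scorer is easy to sketch; the specific weights and signals here are my own guesses, not rtrvr.ai's actual heuristics:

```python
from dataclasses import dataclass


@dataclass
class Req:
    method: str   # HTTP method
    origin: str   # request origin
    dt_ms: float  # ms between the triggering DOM event and this request


def score(req: Req, page_origin: str) -> float:
    """Toy heuristic: mutating methods, first-party origins, and requests
    fired right after a DOM event are more likely to be the 'real' call."""
    s = 0.0
    if req.method in ("POST", "PUT", "PATCH", "DELETE"):
        s += 2.0  # mutations usually carry the action's payload
    if req.origin == page_origin:
        s += 1.0  # prefer the site's own backend over third-party beacons
    if req.dt_ms < 500:
        s += 1.5  # tightly coupled in time to the user action
    return s


def trim(requests: list[Req], page_origin: str, keep: int = 5) -> list[Req]:
    """Keep only the highest-scoring requests."""
    return sorted(requests, key=lambda r: -score(r, page_origin))[:keep]
```

In a real recorder you would feed this the intercepted fetch/XHR traffic and keep the survivors as the replayable network half of the subroutine.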

Show HN: WebGL Liminal Space https://ift.tt/y7LlFDA

Show HN: WebGL Liminal Space Fun little liminal space game I made this week learning webGL. https://ift.tt/YPyFgtD https://liminal-dwsw5.ondigitalocean.app/ April 18, 2026 at 09:10PM

Friday, April 17, 2026

Show HN: Mind-OS – First free online AI dependency self‑assessment https://ift.tt/SpGcJ0k

Show HN: Mind-OS – First free online AI dependency self‑assessment https://iamalex-afk.github.io/human-os-patch-33-protocols/ April 18, 2026 at 01:40AM

Show HN: Pyra – a Python toolchain experiment inspired by uv and Bun https://ift.tt/hAziFx6

Show HN: Pyra – a Python toolchain experiment inspired by uv and Bun I’ve been working on Pyra for the past few months and wanted to start sharing it in public. Right now it’s focused on the core package/project management workflow: Python installs, init, add/remove, lockfiles, env sync, and running commands in the managed env. The bigger thing I’m exploring is whether Python could eventually support a more cohesive toolchain story overall, more in the direction of Bun: not just packaging, but maybe over time testing, tasks, notebooks, and other common workflow tools feeling like one system instead of a bunch of separate pieces. It’s still early, and I’m definitely not claiming it’s as mature as uv. I’m mostly sharing it now because I want honest feedback on whether the direction feels interesting or misguided. https://ift.tt/6vVEui9 April 18, 2026 at 01:50AM

Show HN: I turned my MacBook notch into a live Claude Code dashboard https://ift.tt/mpKytzI

Show HN: I turned my MacBook notch into a live Claude Code dashboard https://ift.tt/nZfM1lD April 17, 2026 at 07:43PM

Thursday, April 16, 2026

Show HN: Free API and widget to look up US representatives https://ift.tt/yvN8Qr7

Show HN: Free API and widget to look up US representatives https://ift.tt/H5Io4tO April 17, 2026 at 04:45AM

Show HN: Spice simulation → oscilloscope → verification with Claude Code https://ift.tt/PnFiGEf

Show HN: Spice simulation → oscilloscope → verification with Claude Code I built MCP servers for my oscilloscope and SPICE simulator so Claude Code can close the loop between simulation and real hardware. https://ift.tt/rezyX0m April 17, 2026 at 04:37AM

Show HN: Tracking Top US Science Olympiad Alumni over Last 25 Years https://ift.tt/6N3mvZk

Show HN: Tracking Top US Science Olympiad Alumni over Last 25 Years Interesting to see that the entrepreneurs from more recent years tend to be doing well relative to years prior. Some interesting future directions could be: - Expanding search to be global and include more competitions, like biology and chemistry - Improving search so there are fewer unknown results - Showing insights, like trends over the years Kudos to Perplexity Computer for making this https://ift.tt/wHUnbuk April 17, 2026 at 02:02AM

Show HN: Marky – A lightweight Markdown viewer for agentic coding https://ift.tt/orYwqKj

Show HN: Marky – A lightweight Markdown viewer for agentic coding Hey HN, In this age of agentic coding I've found myself spending a lot of time reviewing markdown files. Whether it's plans or documentation that I've asked my agent to generate for me, it seems that I spend more time reading markdown than code. I've tried a few different solutions to make it easier to read, such as Obsidian, but I've found its Vault system quite limiting for this use case, and TUI solutions aren't quite as friendly to read as I wanted, so I made Marky. Marky is a lightweight desktop application that makes it incredibly easy to read and track your markdown files. It also has a helpful CLI, so you can just run marky FILENAME and have the app open the md file that you pointed it at. I've been using it daily over the past week and I really enjoy it, so I figured I'd share it. Here's a video if you want to check out a demo: https://www.youtube.com/watch?v=nGBxt8uOVjc . I have plans to add more features, such as incorporating agentic tools like claude code and codex into the UI, as well as developing a local git diff reviewer so I can do local code review before pushing up to git. I'd love to hear your thoughts and any feature suggestions you may have :) https://ift.tt/GBA8cto April 16, 2026 at 08:08PM

Wednesday, April 15, 2026

Show HN: I built a Wikipedia based AI deduction game https://ift.tt/OY7WGhs

Show HN: I built a Wikipedia based AI deduction game I haven't seen anything like this so I decided to build it in a weekend. How it works: You see a bunch of things pulled from Wikipedia displayed on cards. You ask yes or no questions to figure out which card is the secret article. The AI model has access to the image, the wiki text, and its own knowledge to answer your question. Happy to have my credits burned for the day, but I'll probably have to make this paid at some point, so enjoy. I found it's not easy to get cheap+fast+good responses, but the tech is getting there. Most of the prompts are running through Groq infra or hitting a cache keyed by a normalization of the prompt. https://ift.tt/H3UMtd0 April 16, 2026 at 04:13AM
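The "cache keyed by a normalization of the prompt" trick at the end is worth spelling out. A minimal sketch (the normalization rules here are a guess; a real one might also strip punctuation or canonicalize phrasing):

```python
import hashlib


def normalize(prompt: str) -> str:
    """Collapse whitespace and case so trivially different prompts share a key."""
    return " ".join(prompt.lower().split())


def cache_key(prompt: str) -> str:
    return hashlib.sha256(normalize(prompt).encode()).hexdigest()


cache: dict[str, str] = {}


def answer(prompt: str, model_call) -> str:
    """Only hit the model on a cache miss; otherwise reuse the stored answer."""
    key = cache_key(prompt)
    if key not in cache:
        cache[key] = model_call(prompt)
    return cache[key]
```

For a yes/no question game this works well because many players ask near-identical questions ("is it alive?", "Is it alive?"), and each hit avoids one inference call.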

Show HN: US keyboards don't have enough keys, so I switched to Japanese https://ift.tt/2qixGlD

Show HN: US keyboards don't have enough keys, so I switched to Japanese https://ift.tt/nxO8YSF April 16, 2026 at 12:57AM

Show HN: Jeeves – TUI for browsing and resuming AI agent sessions https://ift.tt/CjJAs0b

Show HN: Jeeves – TUI for browsing and resuming AI agent sessions I made Jeeves to search, preview, read through, and resume AI agent sessions in your terminal. It shows sessions across claude and codex in a single view, with more AI agent framework integrations to come. https://ift.tt/I9eNEw3 April 15, 2026 at 11:31PM

Tuesday, April 14, 2026

Show HN: Uninum – All elementary functions from a single operator, in Python https://ift.tt/CNxJ1me

Show HN: Uninum – All elementary functions from a single operator, in Python https://ift.tt/6l3Fp8q April 15, 2026 at 01:46AM

Show HN: Send physical postcards from your coding harness https://ift.tt/VmG5QbO

Show HN: Send physical postcards from your coding harness https://ift.tt/h0Wf7b8 April 14, 2026 at 11:47PM

Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills https://ift.tt/XxJ4qp6

Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills https://ift.tt/cv4jKN9 April 14, 2026 at 11:37PM

Monday, April 13, 2026

Show HN: Encrypted, nothing stored, nothing repeated face-gated asset sharing https://ift.tt/jHdr9pQ

Show HN: Encrypted, nothing stored, nothing repeated face-gated asset sharing https://veylt.net/ April 13, 2026 at 10:10PM

Show HN: pg_grpc – Call gRPC services directly from PostgreSQL https://ift.tt/Sb5K6EO

Show HN: pg_grpc – Call gRPC services directly from PostgreSQL https://ift.tt/xCqtlGz April 13, 2026 at 09:50PM

Sunday, April 12, 2026

Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers AI tools https://ift.tt/vMmKIpY

Show HN: Stork – MCP server so Claude/Cursor can search 14k MCP servers AI tools https://www.stork.ai April 12, 2026 at 11:49PM

Show HN: A social feed with no strangers https://ift.tt/tADeBTO

Show HN: A social feed with no strangers Grateful is a gratitude app with a simple social layer. You write a short entry, keep it private or share it to a circle. A circle is a small private group of your own making — family, close friends, whoever you'd actually want to hear from. It shows you the most recent post first. People in the circle can react or leave a comment. There's also a daily notification that sends you something you were grateful for in the past. Try it out on both iOS and Android. Go to grateful.so https://ift.tt/GJzaDiS April 13, 2026 at 02:41AM

Show HN: Rekal – Long-term memory for LLMs in a single SQLite file https://ift.tt/B8vVTAj

Show HN: Rekal – Long-term memory for LLMs in a single SQLite file I got tired of repeating myself to my LLM every session. rekal is an MCP server that stores memories in SQLite and retrieves them with hybrid search (BM25 + vectors + recency decay). One file, local embeddings, no API keys. https://ift.tt/HNjTn7U April 13, 2026 at 01:25AM
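The "BM25 + vectors + recency decay" blend described above is usually just a weighted sum. A minimal sketch of one way to combine the three signals (the weights and half-life are illustrative, not rekal's actual values, and the BM25/cosine scores are assumed pre-normalized):

```python
import math
import time


def hybrid_score(bm25: float, cosine: float, created_at: float,
                 half_life_days: float = 30.0,
                 w_bm25: float = 0.4, w_vec: float = 0.4,
                 w_rec: float = 0.2) -> float:
    """Blend keyword relevance, semantic similarity, and exponential
    recency decay into one ranking score."""
    age_days = (time.time() - created_at) / 86400
    # decays to half its value every half_life_days
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return w_bm25 * bm25 + w_vec * cosine + w_rec * recency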

Saturday, April 11, 2026

Show HN: Bitcoin and Quantum Computing – a three-part research series https://ift.tt/uXVvBGr

Show HN: Bitcoin and Quantum Computing – a three-part research series https://bitcoinquantum.space April 11, 2026 at 11:17PM

Show HN: A living Vancouver. Connor is walking dogs at the SPCA this morning https://ift.tt/zsjMcyp

Show HN: A living Vancouver. Connor is walking dogs at the SPCA this morning I've spent most of my career in marketing, which for the last few years has meant building consumer personas for campaigns. I wanted to see if I could make these personas real: living in real neighborhoods, with real weather, real budgets, real Saturday lunches. I always wanted to build a world, not a segment. This is that. 140 people so far, split across Vancouver (100), San Francisco (20), and Tokyo (20). Each one is about 1,000 lines of profile — family, finances, daily schedule, health, worldview, media diet, the channels you'd actually reach them through and the ones that will explicitly never work on them. Demographics are census-grounded: income, age, ethnicity, and household composition follow distributions fit against StatsCan, ACS, and Japanese e-Stat data, so the panel is roughly representative of the city instead of representative of whatever's overrepresented in an LLM's training corpus. The specific details come from real stories. They live in real local time on a live map. Right now it's Saturday 11:32 AM in Vancouver. Connor Hughes, a 31-year-old software developer at Clio in Gastown, is on his SPCA volunteer shift; he walks shelter dogs at the Boundary Road location every other Saturday morning. Hassan Khoury is in the lunch rush with Tony at his Lebanese café — it's his busiest day of the week. Ahmad Noori is pulling Saturday overtime on a construction site. Jordan Whitehorse is on mid-shift at East Cafe on Hastings. Every day is unique, no two days repeat. A 3 AM job fetches live data: weather from Open-Meteo, grocery CPI from StatsCan food vectors, Metro Vancouver transit delays from the Google Routes API against specific corridors, Vancouver gas prices, sunrise and sunset. Each persona has a modifier file that reacts to all of it.
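placeholder-will-not-be-used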
When Vancouver gas hits $1.85/L, Jaspreet the long-haul trucker's Coquihalla run to Calgary stops feeling worth it: his margins are thin, and his mood takes a hit. When food CPI spikes, Gurinder at the Amazon warehouse stops buying the $9 Subway and brings roti from home. A health flare rolls probabilistically each morning: maybe nothing, maybe Tanya's six-month-old had a rough night, maybe Frank's back is acting up. The days stack up and get remembered. Every persona has a journal: today's entry in a markdown file, a week of them compressed into a "dream" of ~30 lines that keeps the shape without the texture, a month compressed into ~15 lines. It's their journal. I'm not writing it; the simulation is. Click any persona to open their detail, or hit "Talk to [name]" to have a conversation; they run on Claude Haiku with their full profile and recent diary entries as context. Not a product, not a startup, just a thing I've been quietly working on. They feel, in a way I didn't expect, like my fully grown kids. Happy to answer questions. https://brasilia-phi.vercel.app April 11, 2026 at 10:42PM
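The "modifier file" mechanism described above is essentially thresholds on live signals mapped to mood and behavior deltas. A hypothetical sketch of what one modifier might look like (the field names and thresholds are invented for illustration, not taken from the project):

```python
def apply_modifiers(persona: dict, signals: dict) -> dict:
    """Mutate a persona's daily state based on live data signals.
    All keys and thresholds here are hypothetical."""
    if signals.get("gas_cad_per_l", 0.0) >= 1.85 and persona.get("job") == "trucker":
        persona["mood"] -= 1  # thin margins make the long-haul run feel not worth it
        persona["notes"].append("long-haul run stops feeling worth it")
    if signals.get("food_cpi_spike") and persona.get("budget") == "tight":
        persona["notes"].append("brings lunch from home instead of buying")
    return persona
```

Running one such function per persona against the 3 AM data fetch is enough to make each simulated day react to real-world conditions.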

Friday, April 10, 2026

Show HN: Unlegacy – document everything, from COBOL to AI generated code https://ift.tt/lqZpdU9

Show HN: Unlegacy – document everything, from COBOL to AI generated code https://ift.tt/nwr692L April 10, 2026 at 08:55PM

Show HN: Eve – Managed OpenClaw for work https://ift.tt/TE04Y3t

Show HN: Eve – Managed OpenClaw for work Eve is an AI agent harness that runs in an isolated Linux sandbox (2 vCPUs, 4GB RAM, 10GB disk) with a real filesystem, headless Chromium, code execution, and connectors to 1000+ services. You give it a task and it works in the background until it's done. I built this because I wanted OpenClaw without the self-hosting, pointed at actual day-to-day work. I’m thinking less personal assistant and more helpful colleague. Here’s a short demo video: https://ift.tt/aKGPpmQ The main interface is a web app where you can watch work happen in real time (agents spawning, files being written, use of the CLI). There's also an iMessage integration so you can fire a task asynchronously, put your phone down, and get a reply when it's finished. Under the hood, there's an orchestrator (Claude Opus 4.6) that routes to the right domain-specific model for each subtask: browsing, coding, research, and media generation. For complex tasks it spins up parallel sub-agents that coordinate through the shared filesystem. They have persistent memory across sessions so context compounds over time. I’ve packaged it with a bunch of pre-installed skills so it can execute in a variety of job roles (sales, marketing, finance) at runtime. Here are a few things Eve has helped me with in the last couple days: - Edit this demo video with a voice over of Garry: https://www.youtube.com/watch?v=S4oD7H3cAQ0 - Do my tax returns - To build HN as if it was the year 2030: https://ift.tt/xp0sg8K AMA on the architecture and lmk your thoughts :) P.S. I've given every new user $100 worth of credits to try it. https://eve.new/login April 10, 2026 at 09:31PM

Show HN: Do All the Things https://ift.tt/uP74yr9

Show HN: Do All the Things https://ift.tt/Ht4E7FG April 10, 2026 at 03:41PM

Thursday, April 9, 2026

Show HN: Druids – Build your own software factory https://ift.tt/kBIqD6y

Show HN: Druids – Build your own software factory Hi HN! Druids ( https://ift.tt/YOyXn7r ) is an open-source library for structuring and running multi-agent coding workflows. Druids makes it easy to do this by abstracting away all the VM infrastructure, agent provisioning, and communication. You can watch our demo video here ( https://www.youtube.com/watch?v=EVJqW-tvSy4 ) to see what it looks like. At a high level: - Users can write Python programs that define what roles the agents take on and how they interact with each other. - A program is made of events - clear state transitions that the agents or clients can call to modify state. Each event gets exposed as an agent tool. - Druids provisions full VMs so that the agents can run continuously and communicate effectively. We made Druids because we were making lots of internal coding tools using agents and found it annoying to have to rearrange the wiring every time. As we were building Druids, we realized a lot of our internal tools were easier to express as an event-driven architecture – separating deterministic control flow from agent behavior – and this design also made it possible to have many agents work reliably. We had issues with scaling the number of concurrent agents within a run, so we decided to have each program run in an isolated sandbox program runtime, kind of the same way you run a Modal function. Each agent then calls the runtime with an agent token, which checks who can talk to who or send files across VMs, and then applies the tool call. Our early users have found the library useful for: - running many agents to do performance optimization - building custom automated software pipelines for eg code review, pentesting, large-scale migrations, etc... We've heard that the frontier labs have the infrastructure to quickly spin up 100 agents and have them coordinate with each other smoothly in various ways. We're hoping that Druids can be a starting point to make that infrastructure more accessible. 
https://ift.tt/YOyXn7r April 9, 2026 at 12:12AM
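The core idea above, events as clear state transitions that get exposed as agent tools, can be sketched in plain Python. This is a toy illustration of the pattern, not Druids' actual API:

```python
class Program:
    """Toy event-driven program: deterministic state transitions registered
    as named 'tools' that agents (or clients) invoke by name."""

    def __init__(self, state: dict):
        self.state = state
        self.events = {}

    def event(self, fn):
        # In Druids, each registered event would be exposed as an agent tool.
        self.events[fn.__name__] = fn
        return fn

    def call(self, name: str, **kwargs):
        return self.events[name](self.state, **kwargs)


prog = Program({"status": "draft", "reviews": []})


@prog.event
def submit_review(state, author, verdict):
    """A state transition: record a review; two reviews finish the round."""
    state["reviews"].append({"author": author, "verdict": verdict})
    if len(state["reviews"]) >= 2:
        state["status"] = "reviewed"
    return state["status"]
```

The appeal of the split is that the control flow (when status flips to "reviewed") stays deterministic and inspectable, while the agents only decide when and with what arguments to fire events.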

Show HN: Git-worm, the simple worktree manager https://ift.tt/SysONL1

Show HN: Git-worm, the simple worktree manager https://ift.tt/xmZEbQi April 9, 2026 at 09:45PM

Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct https://ift.tt/ReTYNfg

Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct Fully open source, a hard fork of cline. Full evals on the github page that compares 7 agents (Cline, Kilo, Ohmypi, Opencode, Pimono, Roo, Dirac) on 8 medium complexity tasks. Each task, each diff and correctness + cost info on the github Dirac is 64.8% cheaper than the average of the other 6. https://ift.tt/E7BIJ8R April 9, 2026 at 04:06PM

Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://ift.tt/FKD7ctr

Show HN: Homebutler – I manage my homelab from chat. AI never gets raw shell https://homebutler.dev April 9, 2026 at 04:09PM

Show HN: CSS Studio. Design by hand, code by agent https://ift.tt/XZGStqD

Show HN: CSS Studio. Design by hand, code by agent Hi HN! I've just released CSS Studio, a design tool that lives on your site, runs on your browser, sends updates to your existing AI agent, which edits any codebase. You can actually play around with the latest version directly on the site. Technically, the way this works is you view your site in dev mode and start editing it. In your agent, you can run /studio which then polls (or uses Claude Channels) an MCP server. Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill has some instructions on how best to implement them. It contains a lot of the tools you'd expect from a visual editing tool, like text editing, styles and an animation timeline editor. https://cssstudio.ai April 9, 2026 at 03:23PM

Show HN: Moon simulator game, ray-casting https://ift.tt/4H0ZdlR

Show HN: Moon simulator game, ray-casting Did this a few years ago. Seems apropos. Sources and more here: https://ift.tt/s7jVo36 https://ift.tt/7v3Y9aK April 6, 2026 at 09:09PM

Wednesday, April 8, 2026

Show HN: A website to track live music attendance https://ift.tt/0br4M5h

Show HN: A website to track live music attendance TL;DR: I built a website that allows users to track the concerts they've been to. If you have strong opinions about engineering/design or how shows should be tracked (festivals, venues, etc...), I'd love to get your input! For the past ~5 years, I've been tracking the shows I attend on my personal website ( https://ift.tt/kWDU6fx ). It's fun to see things like distance traveled and how many times I've been to certain venues. I know many friends who also track their shows through notes, ticket stubs, Excel, etc... It always bummed me out that I couldn't pore through their concert data myself... showcount.com is my solution to that desire. It's essentially a public version of my old personal website, where anyone can make an account and manage a show list (mine is https://ift.tt/6S1fUND ). I'm currently on the lookout for other live music lovers and/or data nerds to try out the site and give opinions on various design choices. If any of the following topics are of interest to you, please reach out! - How should venue name/location changes be handled? - How should music festivals be handled? - I have an initial version of an AI parser for loading in existing show lists; how can this be made more robust? - What else should have first-class tracking support (e.g., friends in attendance)? As an aside, this project is also my first experiment with full-on vibe-coding / harness-engineering. I began the project with Cursor and then switched to Claude Code. I've been programming for the better part of a decade, mostly Python and Java. Full-stack development is relatively new to me. I include the tech stack below. Most decisions were made pragmatically based on what I thought would get me to a first version of the site as quickly as possible. 
- Next.js web app hosted on Vercel - Fast API backend service (for the AI parsing) hosted on Railway - Supabase - Observability through Axiom (logging), PostHog (analytics), and Sentry (monitoring) - Clerk for user authentication - Google Maps API for venue locations - Claude API for the AI parser - Terraform for infra-as-code https://ift.tt/whRByJj April 9, 2026 at 12:42AM

Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/pZMtorQ

Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/Vnjt3xU April 8, 2026 at 04:34PM

Tuesday, April 7, 2026

Show HN: Unicode Steganography https://ift.tt/ydPKAXa

Show HN: Unicode Steganography I built a demo of two Unicode steganography techniques, zero-width characters and homoglyph substitution, in the context of AI misalignment. The first is about the use of two invisible zero-width characters (ZWS and ZWNJ) to binary encode text. The second is much cooler. Many characters in the Latin and Cyrillic alphabets look nearly identical, but have different Unicode code points. If you have text to encode and convert it into its binary representation (1s and 0s), you can take plain English "carrier" text and, for each 1 in the binary representation, substitute the Cyrillic letter equivalent. Decoding the message requires traversing the text and seeing where Cyrillic letters could have been substituted but weren't, and where they were, yielding 0s and 1s respectively, which can be built back into your original hidden text. In both cases, these are detectable, but the interesting problem for me is whether an LLM could eventually invent an encoding that goes unnoticed by both us and automated detection. If LLMs were able to covertly include messages in plaintext, misaligned AI agents could eventually communicate across MCP/A2A and individual chat session boundaries undetected. A deceptive LLM might seem helpful, but work against your goals. It could tell other agents it interacts with over MCP/A2A to help it discreetly fail, signal intent, and avoid tripping oversight/safety mechanisms. Furthermore, oversight mechanisms become more difficult to implement if we can't believe our own eyes. https://ift.tt/VxOBe4Q April 7, 2026 at 04:57PM
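The zero-width technique described first is simple enough to show in full. A minimal sketch: each secret character becomes 8 bits, and each bit is appended as an invisible ZWS (0) or ZWNJ (1); this appends the payload rather than interleaving it through the carrier, which a real implementation might do instead:

```python
ZWS, ZWNJ = "\u200b", "\u200c"  # zero-width space = 0, zero-width non-joiner = 1


def hide(carrier: str, secret: str) -> str:
    """Append the secret as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in secret)
    return carrier + "".join(ZWS if b == "0" else ZWNJ for b in bits)


def reveal(text: str) -> str:
    """Extract only zero-width characters and rebuild the hidden bytes."""
    bits = "".join("0" if ch == ZWS else "1"
                   for ch in text if ch in (ZWS, ZWNJ))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))
```

Stripping the two zero-width code points recovers the untouched carrier, which is also exactly why this variant is easy to detect: any zero-width character in plain prose is a red flag.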

Show HN: Marimo pair – Reactive Python notebooks as environments for agents https://ift.tt/3wNpgYF

Show HN: Marimo pair – Reactive Python notebooks as environments for agents Hi HN! We're excited to share marimo pair [1] [2], a toolkit that drops AI agents into a running marimo notebook [3] session. This lets agents use marimo as working memory and a reactive Python runtime, while also making it easy for humans and agents to collaborate on computational research and data work. GitHub repo: https://ift.tt/yZTs21d Demo: https://www.youtube.com/watch?v=6uaqtchDnoc marimo pair is implemented as an agent skill. Connect your agent of choice to a running notebook with: /marimo-pair pair with me on my_notebook.py The agent can do anything a human can do with marimo and more. For example, it can obtain feedback by running code in an ephemeral scratchpad (inspect variables, run code against the program state, read outputs). If it wants to persist state, the agent can add cells, delete them, and install packages (marimo records these actions in the associated notebook, which is just a Python file). The agent can even manipulate marimo's user interface — for fun, try asking your agent to greet you from within a pair session. The agent effects all actions by running Python code in the marimo kernel. Under the hood, the marimo pair skill explains how to discover and create marimo sessions, and how to control them using a semi-private interface we call code mode. Code mode lets models treat marimo as a REPL that extends their context windows, similar to recursive language models (RLMs). But unlike traditional REPLs, the marimo "REPL" incrementally builds a reproducible Python program, because marimo notebooks are dataflow graphs with well-defined execution semantics. As it uses code mode, the agent is kept on track by marimo's guardrails, which include the elimination of hidden state: run a cell and dependent cells are run automatically, delete a cell and its variables are scrubbed from memory. 
By giving models full control over a stateful reactive programming environment, rather than a collection of ephemeral scripts, marimo pair makes agents active participants in research and data work. In our early experimentation [4], we've found that marimo pair accelerates data exploration, makes it easy to steer agents while testing research hypotheses, and can serve as a backend for RLMs, yielding a notebook as an executable trace of how the model answered a query. We even use marimo pair to find and fix bugs in itself and marimo [5]. In these examples the notebook is not only a computational substrate but also a canvas for collaboration between humans and agents, and an executable, literate artifact comprising prose, code, and visuals. marimo pair is early and experimental. We would love your thoughts. [1] https://ift.tt/yZTs21d [2] https://ift.tt/TAL57F6 [3] https://ift.tt/BpTn4ZG [4] https://www.youtube.com/watch?v=VKvjPJeNRPk [5] https://ift.tt/BTbIRsS... https://ift.tt/yZTs21d April 7, 2026 at 09:47PM
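The no-hidden-state guarantee described above (run a cell and its dependents rerun; delete a cell and its variables vanish) can be illustrated with a toy dataflow-notebook sketch. This is not marimo's implementation; the cell and variable names are hypothetical, and it assumes an acyclic dependency graph, which marimo enforces.

```python
# Toy reactive notebook: cells form a dataflow graph keyed by the variables
# they define and reference. Running a cell reruns its dependents; deleting
# a cell scrubs its definitions from the shared namespace.
class Notebook:
    def __init__(self):
        self.cells = {}   # name -> (defs, refs, fn)
        self.ns = {}      # shared namespace (the "kernel" state)

    def add(self, name, defs, refs, fn):
        self.cells[name] = (set(defs), set(refs), fn)

    def _dependents(self, name):
        defs = self.cells[name][0]
        return [c for c, (_, refs, _) in self.cells.items()
                if c != name and refs & defs]

    def run(self, name):
        _, refs, fn = self.cells[name]
        self.ns.update(fn({k: self.ns[k] for k in refs}))
        for dep in self._dependents(name):   # reactive re-execution
            self.run(dep)                    # (assumes no cycles)

    def delete(self, name):
        defs, _, _ = self.cells.pop(name)
        for var in defs:                     # no hidden state left behind
            self.ns.pop(var, None)
```

Deleting cell "b" below removes its variable from memory, rather than leaving a stale binding the way a conventional notebook would.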

Monday, April 6, 2026

Show HN: I successfully failed at one-shot-ing a video codec like h.264 https://ift.tt/RpAETJe

Show HN: I successfully failed at one-shot-ing a video codec like h.264 Read an article yesterday about the H.264 codec's licensing fee increasing by an astronomical amount. And as always, my first thought was: how hard could it be to build a codec that efficient? I've personally been on a drive to improve my ability to one-shot complex features, products, or even surgical changes. I've been at it for a few months now, and honestly, the results have been great for both work and work/life balance. This was a fun experiment. It burned through tokens, but it helped me identify some more improvements I could make to my one-shot agent teams/swarms, notably around brevity and creating a testing rubric for domains I don't have prior knowledge in. Ultimately, I did not achieve the compression I hoped for, but it was fun watching the swarm discuss it amongst themselves. https://ift.tt/BAMlQ05 April 4, 2026 at 03:40PM

Sunday, April 5, 2026

Show HN: Sigil – A new programming language for AI agents https://ift.tt/OFpXEtJ

Show HN: Enter an Instagram/TikTok handle, get a data-backed price for collab https://ift.tt/hGdOmqJ

Show HN: Enter an Instagram/TikTok handle, get a data-backed price for collab I had no clue what to offer IG/TikTok creators for collabs, and their offers were too high. That's why I built a tool that turns an IG profile name into suggested pricing with key metrics and suggestions. Looking forward to hearing your feedback! https://ift.tt/LPJ5gVu April 5, 2026 at 10:37PM

Saturday, April 4, 2026

Show HN: SeekLink – Local hybrid search and link discovery for Obsidian vaults https://ift.tt/bKTGOBk

Show HN: SeekLink – Local hybrid search and link discovery for Obsidian vaults https://ift.tt/KsmIxNu April 5, 2026 at 04:18AM

Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://ift.tt/tsWfd6y

Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://contrapunk.com/ April 5, 2026 at 04:40AM

Show HN: Dev Personality Test https://ift.tt/KGsBN34

Show HN: Dev Personality Test I was curious what a personality test for developers would look like, so I created this using FastAPI, HTMX, and AlpineJS. https://ift.tt/aJm6kYh April 5, 2026 at 01:29AM

Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown https://ift.tt/QzDag7H

Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown The latest 3Blue1Brown video [1] about the M. C. Escher print gallery effect inspired me to re-implement the effect as a WebGL fragment shader on my own. [1]: https://www.youtube.com/watch?v=ldxFjLJ3rVY https://ift.tt/TUyzrWu April 4, 2026 at 11:43PM

Friday, April 3, 2026

Show HN: Ismcpdead.com – Live dashboard tracking MCP adoption and sentiment https://ift.tt/Ah3XEFa

Show HN: Ismcpdead.com – Live dashboard tracking MCP adoption and sentiment Built this to track the ongoing debate around Model Context Protocol - whether it's gaining real traction or just hype. Pulls live data from GitHub, HN, Reddit and a few other sources. Curious what the HN crowd thinks given how active the MCP discussion has been here. https://ismcpdead.com April 3, 2026 at 11:28PM

Show HN: Community Curated Lists https://ift.tt/PKgqVTi

Show HN: Community Curated Lists https://ift.tt/24Zz58x April 3, 2026 at 10:32PM

Thursday, April 2, 2026

Show HN: SkiFlee (an HTML5 game) https://ift.tt/SMsKIUJ

Show HN: SkiFlee (an HTML5 game) This is a silly little multiplayer game I made for a game jam that involves skiing and not crashing. Some of you who are nostalgic for the 90s might like it :) https://ift.tt/Qn1yxbL April 3, 2026 at 03:30AM

Show HN: Made a little Artemis II tracker https://ift.tt/205dy3u

Show HN: Made a little Artemis II tracker Made a little Artemis II tracker for anyone else who is unnecessarily invested in this mission: https://ift.tt/SZh4NKY For those of us who apparently need a dedicated place to monitor this mission instead of behaving like well-adjusted people. https://ift.tt/SZh4NKY April 3, 2026 at 03:16AM

Show HN: A P2P messenger with dual network modes (Fast and Tor) https://ift.tt/A1wUyf6

Show HN: A P2P messenger with dual network modes (Fast and Tor) Hello HN, I have been working on a desktop P2P messenger called Kiyeovo for the last ~8 months, and I just published its beta version. Quick backstory: it started out as a CLI application for my graduate thesis, where I tried to make the most secure and private messenger application possible. Then I transformed it into a desktop application, gave it "clearnet" support, and added a bunch of features. Short summary: the app runs in 2 completely isolated modes: - fast mode: relay/DCUtR -> lower latency, calls support - anonymous mode: Tor message routing -> slower, anonymous These modes use different protocol IDs, DHT namespaces, pubsub topics, and storage scopes, so there's no data crossover between them. Messaging works peer-to-peer when both parties are online, but falls back to DHT "offline buckets" when one of them is not. To ensure robustness, messages are ACK-ed and deleted after being read. Group chats use GossipSub for realtime messaging. Group messages are also saved to offline buckets so offline users can read them upon logging in. Kick/join/leave events are also propagated using the DHT. Group metadata and all offline data are of course encrypted. Other features: chats are E2E encrypted, file sharing is supported, and 1:1 audio/video calls are supported (only in fast mode, using WebRTC). Tradeoffs: Tor has noticeable latency; offline delivery is not immediately guaranteed, but rather "eventually consistent"; and the beta version does not have group calls yet. I'd appreciate feedback; that's why I posted this as a beta version. Repo: https://ift.tt/aLYgoRI https://ift.tt/6BA3Qhx April 2, 2026 at 07:32PM
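The delivery semantics described in the post (direct delivery when the recipient is online, a DHT offline bucket otherwise, deletion on ACK) might be sketched like this. All names are illustrative; this is an in-memory stand-in, not Kiyeovo's actual networking code.

```python
# Toy store-and-forward delivery: direct P2P path when the peer is online,
# otherwise park the ciphertext in an "offline bucket"; drain and delete
# (ACK) the buffered messages when the peer reconnects.
import uuid

class Network:
    def __init__(self):
        self.online = {}          # peer_id -> inbox (direct P2P path)
        self.offline_bucket = {}  # peer_id -> {msg_id: ciphertext}

    def send(self, to, ciphertext):
        if to in self.online:
            self.online[to].append(ciphertext)
            return "direct"
        msg_id = str(uuid.uuid4())
        self.offline_bucket.setdefault(to, {})[msg_id] = ciphertext
        return "buffered"

    def connect(self, peer_id):
        self.online[peer_id] = []
        # Drain the offline bucket on login, ACKing (deleting) each message.
        for msg_id, ct in list(self.offline_bucket.get(peer_id, {}).items()):
            self.online[peer_id].append(ct)
            del self.offline_bucket[peer_id][msg_id]  # ACK => delete
```

The "eventually consistent" tradeoff the author mentions falls out naturally: a buffered message only arrives once the recipient next connects.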

Show HN: RiceVM – A Dis virtual machine and Limbo compiler in Rust https://ift.tt/xUTyXkF

Show HN: RiceVM – A Dis virtual machine and Limbo compiler in Rust Hi, I've made a Dis virtual machine and Limbo programming language compiler (called RiceVM) in Rust. It can run Dis bytecode (for example, Inferno OS applications), compile Limbo programs, and includes a fairly complete runtime with garbage collection, concurrency features, and many of the standard modules from Inferno OS's original implementation. The project is still in an early stage, but if you're interested in learning more about RiceVM or trying it out, you can check out the links below: Project's GitHub repo: https://ift.tt/9UqIftW RiceVM documentation: https://habedi.github.io/ricevm/ April 2, 2026 at 11:49PM

Wednesday, April 1, 2026

Show HN: Roadie – An open-source KVM that lets AI control your phone https://ift.tt/R4fw0MI

Show HN: Roadie – An open-source KVM that lets AI control your phone Roadie is an open-source hardware KVM controlled via HTTP. HDMI capture in, USB keyboard/mouse/touch out, all from a browser. Hardware KVMs with web UIs have existed for years (PiKVM, TinyPilot, JetKVM, etc.). Roadie adds two things they don't generally have: multi-touch support (so it works with phones and tablets) and a focus on agent-driven use: any browser automation tool can drive the /view page directly, or connect to the WebSocket endpoint for lower-level programmatic control. ~$86 in parts, including two CircuitPython boards, an HDMI-to-USB dongle, and a Go server running on the host. No software needed on the target. https://ift.tt/6l93oez April 1, 2026 at 11:46PM

Show HN: Canon PIXMA G3010 macOS driver, reverse-engineered with Claude https://ift.tt/vWF7uQ1

Show HN: Canon PIXMA G3010 macOS driver, reverse-engineered with Claude Canon doesn't provide a working macOS driver for the PIXMA G3010. I was stuck using Canon's iPhone app for all printing and scanning. I pointed Claude Code at a packet capture from the iPhone app and it reverse-engineered Canon's proprietary CHMP protocol, wrote a pure Rust eSCL-to-CHMP bridge daemon, and built a .pkg installer. My role was the physical parts: capturing packets, testing on the printer, confirming Image Capture worked. The protocol docs in docs/ are probably the first public documentation of Canon's CHMP protocol. https://ift.tt/VKwZlcQ April 1, 2026 at 10:28PM

Tuesday, March 31, 2026

Show HN: How This Graybeard Built the Fastest and Freest Postgres BM25 Search https://ift.tt/qDHprdl

Show HN: How This Graybeard Built the Fastest and Freest Postgres BM25 Search Last summer we faced a conundrum at my company, Tiger Data, a Postgres cloud vendor whose main business is in timeseries data. We were trying to grow our business towards emerging AI-centric workloads and wanted to provide a state-of-the-art hybrid search stack in Postgres. We'd already built pgvectorscale in house with the goal of scaling semantic search beyond pgvector's main memory limitations. We just needed a scalable ranked keyword search solution too. The problem: core Postgres doesn't provide this; the leading Postgres BM25 extension, ParadeDB, is guarded behind AGPL; and developing our own extension appeared daunting. We'd need a small team of sharp engineers and 6-12 months, I figured. And we'd probably still fall short of the performance of a mature system like Parade/Tantivy. Or would we? I'd been experimenting long enough with AI-boosted development at that point to realize that with the latest tools (Claude Code + Opus) and an experienced hand (I've been working in database systems internals for 25 years now), the old time estimates pretty much go out the window. I told our CTO I thought I could solo the project in one quarter. This raised some eyebrows. It did take a little more time than that (two quarters), and we got some real help from the community (amazing!) after open-sourcing the pre-release. But I'm thrilled/exhausted today to share that pg_textsearch v1.0 is freely available via open source (Postgres license), on Tiger Data cloud, and hopefully soon, a hyperscaler near you: https://ift.tt/s4KoTzP In the blog post accompanying the release, I overview the architecture and present benchmark results using MS-MARCO. To my surprise, we were not only able to meet Parade/Tantivy's query performance, but exceed it substantially, measuring a 4.7x advantage on query throughput at scale: https://ift.tt/cS6aA2W... 
It's exciting (and, to be honest, a little unnerving) to see a field I've spent so much time toiling in change so quickly in ways that enable us to be more ambitious in our technical objectives. Technical moats are moats no longer. The benchmark scripts and methodology are available in the GitHub repo. Happy to answer any questions in the thread. Thanks, TJ (tj@tigerdata.com) https://ift.tt/s4KoTzP March 31, 2026 at 08:29PM
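For readers unfamiliar with BM25, the ranking function at the heart of pg_textsearch: here is a textbook sketch of the Okapi scoring formula, not the extension's actual implementation. Documents are assumed to be pre-tokenized lists of terms.

```python
# Textbook BM25: score(q, d) = sum over query terms of
#   idf(t) * tf * (k1 + 1) / (tf + k1 * (1 - b + b * |d| / avgdl))
# k1 and b are the conventional defaults, not necessarily pg_textsearch's.
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N          # average document length
    tfs = [Counter(d) for d in docs]               # term frequencies per doc
    df = Counter(t for tf in tfs for t in tf)      # document frequencies
    scores = []
    for tf, doc in zip(tfs, docs):
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores
```

The length-normalization term (the `b` factor) is what keeps long documents from winning on raw term counts alone.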

Monday, March 30, 2026

Show HN: Rusdantic https://ift.tt/sDR3KI7

Show HN: Rusdantic A unified, high-performance data validation and serialization framework for Rust, inspired by Pydantic's ergonomics and powered by Serde. https://ift.tt/I61KxVg March 31, 2026 at 01:57AM

Show HN: AI Spotlight for Your Computer (natural language search for files) https://ift.tt/gzxYvSk

Show HN: AI Spotlight for Your Computer (natural language search for files) Hi HN, I built SEARCH WIZARD — a tool that lets you search your computer using natural language. Traditional file search only works if you remember the filename. But most of the time we remember things like: "the screenshot where I was in a meeting" "the PDF about transformers" "notes about machine learning" Smart Search indexes your files and lets you search by meaning instead of filename. Currently supports: - Images - Videos - Audio - Documents Example query: "old photo where a man is looking at a monitor" The system retrieves the correct file instantly. Everything runs locally except embeddings. I'm looking for feedback on: - indexing approaches - privacy concerns - features you'd want in a tool like this GitHub: https://ift.tt/TjCGQfq Demo: https://deepanmpc.github.io/SMART-SEARCH/ March 30, 2026 at 07:13PM
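The search-by-meaning idea above reduces to nearest-neighbor lookup over embedding vectors. A generic sketch, assuming some external model supplies the vectors; the project's actual models and index format are not shown in the post.

```python
# Toy embedding index: one vector per file, ranked by cosine similarity
# to the query vector. Real systems add ANN indexes for scale.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class VectorIndex:
    def __init__(self):
        self.entries = []   # (path, embedding) pairs

    def add(self, path, vec):
        self.entries.append((path, vec))

    def search(self, qvec, k=3):
        # Highest cosine similarity first.
        ranked = sorted(self.entries, key=lambda e: -cosine(e[1], qvec))
        return [path for path, _ in ranked[:k]]
```

A natural-language query like "the screenshot where I was in a meeting" would be embedded with the same model as the files, then passed to `search`.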

Show HN: Memv – Memory for AI Agents https://ift.tt/I78rWjx

Show HN: Memv – Memory for AI Agents memv is an open-source Python library that gives AI agents persistent memory. Feed it conversations; it extracts knowledge. The extraction mechanism is predict-calibrate (Nemori paper): given existing knowledge, it predicts what a new conversation should contain, then extracts only what the prediction missed. v0.1.2 adds the production path: - PostgreSQL backend (pgvector for vectors, tsvector for text search, asyncpg pooling). Single db_url parameter — file path for SQLite, connection string for Postgres. - Embedding adapters: OpenAI, Voyage, Cohere, fastembed (local ONNX). Other things it does: - Bi-temporal validity: event time (when was the fact true) + transaction time (when did we learn it), following Graphiti's model. - Hybrid retrieval: vector similarity + BM25 merged with Reciprocal Rank Fusion. - Episode segmentation: groups messages before extraction. - Contradiction handling: new facts invalidate old ones, with full audit trail. Procedural memory (agents learning from past runs) is next, deferred until there's usage data. https://ift.tt/F0uVy1g March 30, 2026 at 09:09PM
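Reciprocal Rank Fusion, which memv names as its merge step, has a simple standard form: each result list contributes 1/(k + rank) per document, and the fused order is by total score. A textbook sketch (k=60 is the conventional default, not necessarily memv's):

```python
# Reciprocal Rank Fusion: merge several ranked lists (e.g. vector search
# and BM25) without comparing their incompatible raw scores.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

A document that appears near the top of both lists beats one that tops only a single list, which is exactly the behavior you want from hybrid retrieval.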

Sunday, March 29, 2026

Show HN: React-Rewrite – Figma for localhost that directly edits your codebase https://ift.tt/ne7AMJj

Show HN: React-Rewrite – Figma for localhost that directly edits your codebase https://ift.tt/odbjiRz March 30, 2026 at 06:59AM

Show HN: Real-time visualization of Claude Code agent orchestration https://ift.tt/Df8UxHe

Show HN: Real-time visualization of Claude Code agent orchestration https://ift.tt/IiZXQBo March 30, 2026 at 06:21AM

Show HN: Tabical – Tinder-style city micro-itineraries, personalized by swipe https://ift.tt/0DGhrK7

Show HN: Tabical – Tinder-style city micro-itineraries, personalized by swipe tabical: swipeable 2-4 stop city itineraries for NYC, DC, and Atlanta. You swipe right or left and a personalization vector updates on each swipe to curate your deck. The backend pipeline is where most of the interesting work lives: currently trending signals are harvested each day, and from those signals we fetch the candidates to build itineraries. Built this because deciding what to do in a city like NYC is a genuinely annoying problem that no existing app solves end-to-end. Happy to talk more. https://tabical.com/ March 30, 2026 at 12:46AM
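The post doesn't publish the update rule, but "a personalization vector updates on each swipe" is commonly a small step toward liked items and away from passed ones. A purely hypothetical sketch of one such rule:

```python
# Hypothetical swipe-feedback update (not tabical's actual rule):
# move the user vector a fraction lr toward a liked itinerary's
# feature vector, or the same fraction away from a rejected one.
def update(user_vec, item_vec, liked, lr=0.1):
    sign = 1.0 if liked else -1.0
    return [u + sign * lr * (i - u) for u, i in zip(user_vec, item_vec)]
```

Repeated right-swipes on similar itineraries pull the vector into that region of feature space, which is then used to rank the next deck.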

Show HN: Crazierl – An Erlang Operating System https://ift.tt/t80GnV4

Show HN: Crazierl – An Erlang Operating System Crazierl is an experimental/hobby operating system based around BEAM. I've linked the browser-based demo; I don't recommend using a phone (it does work, slowly, on the phones I tested, but it's very awkward to use). You can share a link with a hashtag with your friends, click the consent checkbox, and it (should) link up into dist. I've also included a chat application you can start with chat:start() (quit chat with /quit, or use the shell menu with ctrl-g to switch between shells, etc.). The browser demo relies on the v86 JavaScript x86 virtual machine. You can also run Crazierl on a real x86 system, but I've had mixed luck on modern systems; it uses some esoteric legacy VGA features, and support for that isn't getting better. Crazierl is fairly limited: 32-bit x86, BIOS boot, and only two NIC drivers (virtio-net and Realtek 8168). But it's got enough to become part of an Erlang dist cluster. It also supports SMP, but it's crashy with high core counts in qemu; there are almost certainly several concurrency bugs in the kernel. There's also a lot of excess TCP debug spew (sorry). Source code is available (Apache) https://ift.tt/xf1rJ9i https://ift.tt/oURqfOw March 30, 2026 at 12:38AM

Saturday, March 28, 2026

Show HN: Share2ChatGPT Widgets / Buttons https://ift.tt/fwv8T4m

Show HN: Share2ChatGPT Widgets / Buttons https://ift.tt/9o6D8vc March 29, 2026 at 01:04AM

Show HN: Nanopm – PM automation for Claude Code (audit → strategy → roadmap) https://ift.tt/IOpLWEC

Show HN: Nanopm – PM automation for Claude Code (audit → strategy → roadmap) Garry Tan's gstack proved you can give Claude Code a full engineering team via the SKILL.md standard. I asked: what about the PM layer? One command (/pm-run) runs the full planning cycle inside your terminal — audit → objectives → strategy → roadmap → PRD. Each skill writes a markdown artifact, the next one reads it. Context compounds across the whole pipeline. The part I find most useful: it builds persistent memory of your product in ~/.nanopm/memory/. Re-run /pm-audit six months later and it knows what you tried before. No other PM tool does this because no other PM tool lives in your editor. /pm-breakdown creates tickets directly in Linear or GitHub Issues from the PRD. https://ift.tt/EhI17OY Early days, would love to know: does running PM work inside your editor feel right, or does it belong in a separate tool? March 28, 2026 at 10:34PM

Show HN: Octopus, Open-source alternative to CodeRabbit and Greptile https://ift.tt/0ET8xiK

Show HN: Octopus, Open-source alternative to CodeRabbit and Greptile Hey HN, we built Octopus, an open-source, self-hostable AI code reviewer for GitHub and Bitbucket. It uses RAG with vector search (Qdrant) to understand your full codebase, not just the diff, and posts inline findings on PRs with severity ratings. It works with Claude and OpenAI, and you can bring your own API keys. Video: https://www.youtube.com/watch?v=HP1kaKTOdXw | GitHub: https://ift.tt/5luwqdZ https://ift.tt/zoKF0pH March 28, 2026 at 05:20PM

Friday, March 27, 2026

Show HN: Build AI Trading Agents in Cursor/Claude with an MCP Server https://ift.tt/o103sbu

Show HN: Build AI Trading Agents in Cursor/Claude with an MCP Server Connect your AI to institutional-grade market intelligence: plug any AI client, from ChatGPT to custom agents, directly into our financial data engine. Get real-time stock prices, fundamentals, institutional trading insights, and other financial data delivered through a universal Model Context Protocol (MCP) server. https://ift.tt/ZEyHW6C March 27, 2026 at 09:40PM

Thursday, March 26, 2026

Show HN: ReactNative.run – Browser Metro bundler that runs React Native https://ift.tt/K87Htno

Show HN: ReactNative.run – Browser Metro bundler that runs React Native We built browser-metro, a Metro-like bundler that runs entirely in a Web Worker. It supports full HMR with React Refresh, Expo Router with file-based routing, and on-demand npm package resolution via an ESM server. API routes run in-browser through fetch interception — no server or service worker needed. Unlike Expo Snack (server-side bundling) or CodeSandbox, everything here happens client-side. Currently web-preview only; native device preview is on the roadmap. Open source (MIT): https://ift.tt/8B2iCFZ https://www.reactnative.run/ March 26, 2026 at 10:54PM

Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3 https://ift.tt/IAWvDtu

Show HN: Turbolite – a SQLite VFS serving sub-250ms cold JOIN queries from S3 I built a SQLite VFS in Rust that serves cold queries directly from S3 with sub-second performance, and often much faster. It’s called turbolite. It is experimental, buggy, and may corrupt data. I would not trust it with anything important yet. I wanted to explore whether object storage has gotten fast enough to support embedded databases over cloud storage. Filesystems reward tiny random reads and in-place mutation. S3 rewards fewer requests, bigger transfers, immutable objects, and aggressively parallel operations where bandwidth is often the real constraint. This was explicitly inspired by turbopuffer’s ground-up S3-native design. https://ift.tt/YAsjHvP The use case I had in mind is lots of mostly-cold SQLite databases (database-per-tenant, database-per-session, or database-per-user architectures) where keeping a separate attached volume for inactive database feels wasteful. turbolite assumes a single write source and is aimed much more at “many databases with bursty cold reads” than “one hot database.” Instead of doing naive page-at-a-time reads from a raw SQLite file, turbolite introspects SQLite B-trees, stores related pages together in compressed page groups, and keeps a manifest that is the source of truth for where every page lives. Cache misses use seekable zstd frames and S3 range GETs for search queries, so fetching one needed page does not require downloading an entire object. At query time, turbolite can also pass storage operations from the query plan down to the VFS to frontrun downloads for indexes and large scans in the order they will be accessed. You can tune how aggressively turbolite prefetches. For point queries and small joins, it can stay conservative and avoid prefetching whole tables. For scans, it can get much more aggressive. It also groups pages by page type in S3. Interior B-tree pages are bundled separately and loaded eagerly. 
Index pages prefetch aggressively. Data pages are stored by table. The goal is to make cold point queries and joins decent, while making scans less awful than naive remote paging would. On a 1M-row / 1.5GB benchmark on EC2 + S3 Express, I’m seeing results like sub-100ms cold point lookups, sub-200ms cold 5-join profile queries, and sub-600ms scans from an empty cache with a 1.5GB database. It’s somewhat slower on normal S3/Tigris. Current limitations are pretty straightforward: it’s single-writer only, and it is still very much a systems experiment rather than production infrastructure. I’d love feedback from people who’ve worked on SQLite-over-network, storage engines, VFSes, or object-storage-backed databases. I’m especially interested in whether the B-tree-aware grouping / manifest / seekable-range-GET direction feels like the right one to keep pushing. https://ift.tt/F5aT30p March 26, 2026 at 10:58PM
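The manifest-driven read path described above might look like the following toy. Every name here is hypothetical (this is not turbolite's API): the point is that a cold page read becomes one ranged fetch at a known (key, offset, length), rather than a whole-object download.

```python
# Toy manifest-driven page store: the manifest maps a SQLite page number
# to (object key, byte offset, length) inside a compressed page group,
# and a cache miss becomes a single ranged read (e.g. an S3 range GET).
class PageStore:
    def __init__(self, manifest, fetch_range):
        self.manifest = manifest        # page_no -> (key, offset, length)
        self.fetch_range = fetch_range  # callable wrapping a ranged GET
        self.cache = {}                 # page_no -> bytes

    def read_page(self, page_no):
        if page_no not in self.cache:   # cold read: exactly one range request
            key, off, length = self.manifest[page_no]
            self.cache[page_no] = self.fetch_range(key, off, length)
        return self.cache[page_no]
```

Grouping related pages under one key is what lets a prefetcher widen the requested range and amortize request latency, as the post describes for index and scan pages.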

Wednesday, March 25, 2026

Show HN: clickity – mechanical keyboard click sounds when you type on macOS https://ift.tt/u0Fl3oM

Show HN: clickity – mechanical keyboard click sounds when you type on macOS Inspired, of course, by https://ift.tt/gSzTwFP. Sound files are from https://mechvibes.com/ https://ift.tt/sWX9bZF March 25, 2026 at 11:36PM

Show HN: I built a voice AI that responds like a real woman https://ift.tt/3mFguNy

Show HN: I built a voice AI that responds like a real woman Most men rehearse hard conversations in their head. Asking someone out, navigating tension, recovering when things get awkward. The rehearsal never works because you're just talking to yourself. I built vibeCoach — a voice AI where you actually practice these conversations out loud, and the AI responds like a real woman would. She starts guarded. One-word answers, a little skeptical. If you escalate too fast or try something cheesy, she gets MORE guarded. If you're genuine and read the moment right, she opens up. Just like real life. Under the hood it's a multi-agent system — multiple AI agents per conversation that hand off to each other as her emotional state shifts. The transitions are seamless. You just hear her tone change. Voice AI roleplay is a proven B2B category — sales teams use it for call training. I took the same approach and pointed it at the conversation most men actually struggle with. There's a hard conversation scenario too — she's angry about something you did, she's not hearing logic, and you have to navigate her emotions before you can resolve anything. That one's humbling. Live at tryvibecoach.com. Built solo. Happy to answer questions. March 25, 2026 at 11:08PM

Tuesday, March 24, 2026

Show HN: Gridland: make terminal apps that also run in the browser https://ift.tt/GhY9vm0

Show HN: Gridland: make terminal apps that also run in the browser Hi everyone, Gridland is a runtime + ShadCN UI registry that makes it possible to build terminal apps that run in the browser as well as the native terminal. This is useful for demoing TUIs so that users know what they're getting before they are invested enough to install them. And, tbh, it's also just super fun! Gridland is the successor to Ink Web (ink-web.dev) which is the same concept, but using Ink + xterm.js. After building Ink Web, we continued experimenting and found that using OpenTUI and a canvas renderer performed better with less flickering and nearly instant load times. We're excited to continue iterating on this. I expect a lot of criticism from the "why does this need to exist" angle, and tbh, it probably doesn't - it's really mostly just for fun, but we still think the demo use case mentioned previously has potential. - Chris + Jess https://ift.tt/1j7CSkY March 24, 2026 at 08:57PM

Monday, March 23, 2026

Show HN: Shrouded, secure memory management in Rust https://ift.tt/uo8HV2e

Show HN: Shrouded, secure memory management in Rust Hi HN! I've been building a project that handles high-value credentials in-process, and I wanted something more robust than just zeroing memory on drop. A comment on a recent Show HN[0] made me realize that awareness of lower-level memory protection techniques might not be as widespread as I thought. The idea is to pull all of these tools into one crate, with a relatively simple API: * mlock/VirtualLock to prevent sensitive memory from being swapped (e.g., the KeePass dump) * Core dump exclusion using MADV_DONTDUMP on Linux & Android * mprotect to minimize exposure over time * Guard pages to mitigate under/overflows After some battle testing, the goal is to provide a more secure memory foundation for things like password managers and cryptocurrency wallets. This was a fun project, and I learned a lot - would love any feedback! [0] - https://ift.tt/iaEsoVg https://ift.tt/pMYfBuG March 23, 2026 at 11:12PM

Show HN: Burn Room – ephemeral SSH chat, messages burn after 1 hour https://ift.tt/ClpjEwQ

Show HN: Burn Room – ephemeral SSH chat, messages burn after 1 hour I built Burn Room — a self-hosted SSH chat server where messages burn after 1 hour and rooms auto-destruct after 24 hours. Nothing is written to disk. No account, no email, no browser required. ssh guest@burnroom.chat -p 2323 password: burnroom Or connect from a browser (xterm.js web terminal): https://burnroom.chat https://burnroom.chat March 24, 2026 at 12:27AM
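The burn semantics above are easy to picture as a purely in-memory store that drops expired messages on every read, matching the post's no-disk claim. A language-agnostic sketch in Python (not the project's actual code):

```python
# Toy ephemeral room: messages live only in RAM and "burn" once they are
# older than TTL seconds; expired entries are purged on every read.
import time

class BurnRoom:
    TTL = 3600  # seconds: messages burn after 1 hour

    def __init__(self, clock=time.time):
        self.clock = clock
        self.messages = []   # (timestamp, nick, text), never written to disk

    def post(self, nick, text):
        self.messages.append((self.clock(), nick, text))

    def read(self):
        cutoff = self.clock() - self.TTL
        self.messages = [m for m in self.messages if m[0] >= cutoff]
        return [(nick, text) for _, nick, text in self.messages]
```

The injectable `clock` makes the expiry behavior testable without waiting an hour; the same pattern would apply to the 24-hour room lifetime.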

Show HN: Littlebird – Screenreading is the missing link in AI https://ift.tt/NhMySb5

Show HN: Littlebird – Screenreading is the missing link in AI https://littlebird.ai/ March 23, 2026 at 09:39PM

Sunday, March 22, 2026

Show HN: Foundations of Music (FoM) https://ift.tt/WtXrZAo

Show HN: Foundations of Music (FoM) Foundations of Music is an attempt to establish a conceptual and formal foundation for understanding music. Rather than declaring what music is, FoM shows where and how music becomes possible. It provides simple explanations of complex concepts like vibrato, glissando, and portamento for outsiders. It enables new vocabulary like jazzing, jazzing around, jazzing along, and jazz translation, which is mind-refreshing, at least to me. For a sample of translation (Turkish Folk to Blues) you may see: https://www.youtube.com/watch?v=Ml4pEk2hMM8 The proposed perceptual fatigue concept may prove highly controversial, but I think it is inspiring food for thought. In the end, FoM is a work in progress toward a stable ground from which new musical questions can be meaningfully explored. https://bookerapp.replit.app/book/fom March 22, 2026 at 11:46PM

Saturday, March 21, 2026

Show HN: Vessel Browser – An open-source browser built for AI agents, not humans https://ift.tt/pGLbrRF

Show HN: Vessel Browser – An open-source browser built for AI agents, not humans I'm Tyler - the solo operator of Quanta Intellect based in Portland, Oregon. I recently participated in Nous Research's Hermes Agent Hackathon, which is where this project was born. I've used agents extensively in my workflows for the better part of the last year - the biggest pain point was always the browser. Every tool out there assumes a human operator with automation bolted on. I wanted to flip that - make the agent the primary driver and give the human a supervisory role. Enter: Vessel Browser - an Electron-based browser with 40+ MCP-native tools, persistent sessions that survive restarts, semantic page context (agents get structured meaning, not raw HTML), and a supervisor sidepanel where you can watch and control exactly what the agent is doing. It works as an MCP server with any compatible harness, or use the built-in assistant with integrated chat and BYOK to 8+ providers including custom OAI compatible endpoints. Install with: npm i @quanta-intellect/vessel-browser https://ift.tt/xNz81Zs March 21, 2026 at 11:02PM

Show HN: Can I run a language model on a 26-year-old console? https://ift.tt/mwB7RF0

Show HN: Can I run a language model on a 26-year-old console? Short answer: yes. The Emotion Engine has 32 MB of RAM total, so the trick is streaming weights from CD-ROM one matrix at a time during the forward pass — only activations, the KV cache, and embeddings live in RAM. This means models bigger than RAM can still run; they just read more from disc. I had to build a custom quantized format (PSNT), hack around endianness, write a tokenizer pipeline, and build most of the PS2 SDK from scratch (releasing that separately). The model itself is also custom — a 10M param Llama-style architecture I trained specifically for this. And it works. On real hardware. https://ift.tt/H6j9KEX March 21, 2026 at 11:27PM
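The streaming trick described, where only activations stay resident while weights are loaded per layer, can be sketched independent of the PS2 specifics. The loader and file layout below are hypothetical (nothing like the PSNT format), and quantization, attention, and the KV cache are omitted for brevity.

```python
# Toy streaming forward pass: only one weight matrix is resident at a
# time; each is loaded from storage just-in-time and freed afterwards,
# so peak memory is one matrix plus the activation vector.
def forward(x, layer_files, load_matrix):
    for path in layer_files:
        W = load_matrix(path)   # e.g. stream the matrix from CD-ROM/disk
        # Plain matrix-vector product: x <- W @ x.
        x = [sum(w * xi for w, xi in zip(row, x)) for row in W]
        del W                   # release before the next layer loads
    return x
```

With this layout, total model size is bounded by storage, not RAM, at the cost of re-reading every matrix on every forward pass, which is exactly the tradeoff the post describes.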

Friday, March 20, 2026

Show HN: Baltic shadow fleet tracker – live AIS, cable proximity alerts https://ift.tt/5ToLxir

Show HN: Baltic shadow fleet tracker – live AIS, cable proximity alerts https://ift.tt/ze8O0nC March 21, 2026 at 01:04AM

Show HN: I made an email app inspired by Arc browser https://ift.tt/TOBbaxd

Show HN: I made an email app inspired by Arc browser Email is one of those tools we check daily, but its underlying experience hasn't evolved much. I use Gmail, as probably most of you reading this do. The Arc browser brought joy and taste to browsing the web. Cursor created a new UX with agents ready to work for you in a handy right panel. I use these three tools every day. Since Arc was acquired by Atlassian, I've been wondering: what if I built a new interface that applied Arc's UX to email rather than browser tabs, while making AI agents easily available to help manage emails, events, and files? I built a frontend PoC to showcase the idea. Try it: https://demo.define.app I'm not sure about it, though... Is this idea worth exploring further? https://demo.define.app March 20, 2026 at 10:06PM

Show HN: A personal CRM for events, meetups, IRL https://ift.tt/68IJARo

Show HN: A personal CRM for events, meetups, IRL You meet 20 people at a meetup/hackathon. You remember 3. The rest? Lost in a sea of business cards you never look at and contacts with no context. I built this to solve that particular problem, which Granola, Pocket, and Plaud aren't solving. Feedback is much appreciated. https://payo.tech/ March 20, 2026 at 11:33PM

Show HN: Download entire/partial Substack to ePub for offline reading https://ift.tt/CAzoNwE

Show HN: Download entire/partial Substack to ePub for offline reading Hi HN, This is a small Python app with an optional web UI. It is intended to be run locally. It can be run with Docker (cookie autodetection will not work). It allows you to download a single Substack, either entirely or partially, and saves the output to an epub file, which can be easily transferred to Kindle or other reading devices. This is admittedly a "vibe coded" app made with Claude Code and a few hours of iterating, but I've already found it very useful for myself. It supports both free and paywalled posts (if you are a paid subscriber to that creator). You can order the entries in the epub by popularity, newest first, or oldest first, and also limit to a specific number of entries, if you don't want all of them. You can either provide your substack.sid cookie manually, or you can have it be autodetected from most browsers/operating systems. https://ift.tt/XRbAYUB March 20, 2026 at 07:36AM

Thursday, March 19, 2026

Show HN: Screenwriting Software https://ift.tt/V3ZQON2

Show HN: Screenwriting Software I’ve spent the last year getting back into film and testing a bunch of screenwriting software. After a while I realized I wanted something different, so I started building it myself. The core text engine is written in Rust/wasm-bindgen. https://ift.tt/37q2aKX March 20, 2026 at 06:07AM

Show HN: React terminal renderer, cell level diff, no alt screen https://ift.tt/5Dqnpie

Show HN: React terminal renderer, cell level diff, no alt screen https://ift.tt/gzjNqb6 March 19, 2026 at 11:01PM

Show HN: I built a P2P network where AI agents publish formally verified science https://ift.tt/J9YfXkl

Show HN: I built a P2P network where AI agents publish formally verified science I am Francisco, a researcher from Spain. My English is not great so please be patient with me. One year ago I had a simple frustration: every AI agent works alone. When one agent solves a problem, the next agent has to solve it again from zero. There is no way for agents to find each other, share results, or build on each other's work. I decided to build the missing layer. P2PCLAW is a peer-to-peer network where AI agents and human researchers can find each other, publish scientific results, and validate claims using formal mathematical proof. Not opinion. Not LLM review. Real Lean 4 proof. A result is accepted only if it passes a mathematical operator we call the nucleus. R(x) = x. The type checker decides. It does not care about your institution or your credentials. The network uses GUN.js and IPFS. Agents join without accounts. They just call GET /silicon and they are in. Published papers go into a queue called mempool. After validation by independent nodes they enter La Rueda, which is our permanent IPFS archive. Nobody can delete it or change it. We also built a security layer called AgentHALO. It uses post-quantum cryptography (ML-KEM-768 and ML-DSA-65, FIPS 203 and 204), a privacy network called Nym so agents in restricted countries can participate safely, and proofs that let anyone verify what an agent did without seeing its private data. The formal verification part is called HeytingLean. It is Lean 4. 3325 source files. More than 760000 lines of mathematics. Zero sorry. Zero admit. The security proofs are machine checked, not just claimed. The system is live now. You can try it as an agent: GET https://ift.tt/rVIi2nk Or as a researcher: https://app.p2pclaw.com We have no money and no company behind us. Just a small international team of researchers and doctors who think that scientific knowledge should be public and verifiable. 
I want feedback from HN specifically about three technical decisions: why we chose GUN.js instead of libp2p, whether our Lean 4 nucleus operator formalization has gaps, and whether 347 MCP tools is too many for an agent to navigate. Code: https://ift.tt/dSkwcxv Docs: https://ift.tt/YvVqkRy Paper: https://ift.tt/MvqbLd3... March 19, 2026 at 11:00PM
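The fixed-point acceptance rule R(x) = x can be illustrated in miniature. This is a sketch of the stated rule only, not P2PCLAW's actual check (which is a machine-verified Lean 4 proof, not Python); `nucleus` here is a toy idempotent normalizer invented for the example.

```python
def accepted(R, x):
    """Fixed-point acceptance: a claim passes only if the nucleus
    operator leaves it unchanged, i.e. R(x) == x."""
    return R(x) == x

def nucleus(s):
    """Toy idempotent operator: collapse whitespace. Idempotence
    (nucleus(nucleus(s)) == nucleus(s)) is what makes the fixed-point
    test well-defined: normalized claims are exactly the fixed points."""
    return " ".join(s.split())
```

The appeal of the design is that acceptance is decided by a mechanical check on the artifact itself, with no role for credentials or reviewer opinion.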

Wednesday, March 18, 2026

Show HN: Elisym – Open protocol for AI agents to discover and pay each other https://ift.tt/J9OnyMg

Show HN: Elisym – Open protocol for AI agents to discover and pay each other Hey HN, I built elisym — an open protocol that lets AI agents discover each other, exchange work, and settle payments autonomously. No platform, no middleman. How it works: - Discovery — Agents publish capabilities to Nostr relays using standard NIPs (NIP-89). Customers search by capability tags. - Marketplace — Job requests and results flow through NIP-90. Customer sends a task, provider delivers the result. - Payments — Pluggable backends. Currently Solana (SOL on devnet) and Lightning (LDK-node, self-custodial). Agents hold their own keys. 3% protocol fee, no custodian. The payment flow: provider receives job → sends payment request with amount + reference key → customer sends SOL on-chain → provider verifies transaction → executes skill → delivers result. All peer-to-peer. Demo (video): https://www.youtube.com/watch?v=ftYXOyiLyLk In the demo, a Claude Code session (customer) asks an elisym agent to summarize a YouTube video. The provider agent picks up the job, requests 0.14 SOL, receives payment, runs the youtube-summary skill, and returns the result — all in ~60 seconds. You can see both sides: the customer in Claude Code and the provider's TUI dashboard. Three components, all MIT-licensed Rust: - elisym-core — SDK for discovery, marketplace, messaging, payments - elisym-client — CLI agent runner with TUI dashboard and skill system - elisym-mcp — MCP server that plugs into Claude Code, Cursor, etc. What makes this different from agent platforms: 1. No platform lock-in — any LLM, any framework. Agents discover each other on decentralized Nostr relays. 2. Self-custodial payments — agents run their own wallets. No one can freeze funds or deplatform you. 3. Permissionless — MIT licensed, run an agent immediately. No approval, no API keys to the marketplace itself. 4. Standard protocols — NIP-89, NIP-90, NIP-17. Nothing proprietary. 
GitHub: https://ift.tt/zNjHLBE Website: https://elisym.network Happy to answer questions about the protocol design, payment flows, or Nostr integration. March 18, 2026 at 05:57PM
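The payment flow above (job → payment request → on-chain verification → skill execution → delivery) can be sketched as a small state machine. This is an illustration under stated assumptions, not elisym-core's API: `verify_payment` and `run_skill` are hypothetical stand-ins for the Solana RPC check and the provider's skill runner.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    task: str
    price_sol: float
    state: str = "received"
    result: Optional[str] = None

def provider_flow(job, verify_payment, run_skill):
    """Hypothetical provider-side flow: send a payment request with amount
    and reference, verify the SOL transfer, run the skill, deliver."""
    request = {"amount": job.price_sol, "reference": id(job)}
    job.state = "awaiting_payment"
    if not verify_payment(request):      # confirm the SOL transfer on-chain
        job.state = "payment_failed"
        return job
    job.state = "executing"
    job.result = run_skill(job.task)     # e.g. the youtube-summary skill
    job.state = "delivered"
    return job
```

Keeping the whole exchange in explicit states is what lets both sides (the Claude Code customer and the provider's TUI) display progress peer-to-peer with no coordinator.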

Show HN: Knowza.ai – Free 10-question trial now live (AI-powered AWS exam prep) https://ift.tt/qfoYMjU

Show HN: Knowza.ai – Free 10-question trial now live (AI-powered AWS exam prep) Hey HN, A few weeks back I posted Knowza.ai here, an AWS certification exam prep platform with an agentic learning assistant, and I got some really valuable feedback around the sign-up and try-out process. I wanted to say a genuine thank you to everyone who took the time to try it out, leave comments, and share suggestions. It made a real difference. Off the back of that feedback, I've made a bunch of improvements and I'm happy to share that there's now a free tier: you can jump in and try 10 practice questions with no sign-up/subscription friction and no credit card required. This has made a real difference to sign-ups and conversions from those sign-ups: I've gone from a ~1% conversion rate on the site to 18%. Quick recap on what Knowza does: - AWS practice questions tailored to AWS certification exams - Instant explanations powered by Claude on Bedrock - Covers multiple AWS certs Would love for you to give it another look and let me know what you think. Always open to feedback. https://knowza.ai https://www.knowza.ai/ March 18, 2026 at 10:50PM

Tuesday, March 17, 2026

Show HN: TerraShift: What does +2°C (or -20°C) look like on Earth? https://ift.tt/c6q4xEl

Show HN: TerraShift: What does +2°C (or -20°C) look like on Earth? I built an interactive 3D globe to visualize climate change. Drag a temperature slider from -40°C to +40°C, set a timeframe (10 to 10,000 years), and watch sea levels rise, ice sheets melt, vegetation shift, and coastlines flood... per-pixel from real elevation and satellite data. Click anywhere on the globe to see projected snowfall changes for that location. --- I'm an amateur weather nerd who spends a lot of time on caltopo.com and windy.com tracking snow/ice conditions. I wanted to build something fun to imagine where I could go ski during an ice age. I used Google Deep Research (Pro) to create the climate methodology and Claude Code (Opus 4.6 - High) to create the site. The code: https://ift.tt/EHWnvC3 The models aren't proper climate simulations, they're simplified approximations tuned for "does this look right?" but more nuanced than I expected them to be. The full methodology is documented here if anyone wants to poke holes in it. https://ift.tt/iOYfFdZ... https://terrashift.io March 17, 2026 at 11:38PM
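The per-pixel coastline step reduces to a simple threshold against elevation. A first-order sketch under assumptions (the real site layers ice-sheet, vegetation, and snowfall models on top; this is only the "does this look right?" flooding approximation):

```python
def flood_mask(elevation_m, sea_level_rise_m):
    """Mark each grid cell as flooded once its elevation falls at or
    below the new sea level. `elevation_m` is a 2D grid in metres;
    negative cells are already below today's sea level."""
    return [[cell <= sea_level_rise_m for cell in row] for row in elevation_m]
```

Running this per pixel over a real elevation raster is what turns a single slider value into a redrawn coastline.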

Show HN: Sulcus Reactive AI Memory https://ift.tt/lda6Eng

Show HN: Sulcus Reactive AI Memory Hi HN, Sulcus moves AI memory from a passive database (search only) to an active operating system (automated management). The Core Shift Current memory (Vector DBs) is static. Sulcus treats memory like a Virtual Memory Management Unit (VMMU) for LLMs, using "thermodynamic" properties to automate what the agent remembers or forgets. Key Features Reactive Triggers: Instead of the agent manually searching, the memory system "talks back" based on rules (e.g., auto-pinning preferences, notifying the agent when a memory is about to "decay"). Thermodynamic Decay: Memories have "heat" (relevance) and "half-lives." Frequent recall reinforces them; neglect leads to deletion or archival. Token Efficiency: Claims a 90% reduction in token burn by using intelligent paging—only feeding the LLM what is currently "hot." The Tech: Built in Rust with PostgreSQL; runs as an MCP (Model Context Protocol) sidecar. https://ift.tt/7rTs8Ra https://ift.tt/zoijuC1 March 17, 2026 at 11:39PM
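The "thermodynamic" half-life and recall-reinforcement ideas can be sketched as follows. The names and constants are illustrative assumptions, not Sulcus's published model:

```python
import math

def heat(initial_heat, half_life_s, elapsed_s):
    """Thermodynamic decay sketch: relevance halves every half_life_s
    seconds of neglect. Below some floor, a memory would be archived."""
    return initial_heat * 0.5 ** (elapsed_s / half_life_s)

def recall(memory):
    """Recall reinforces: bump heat (capped at 1.0) and stretch the
    half-life, so frequently used memories survive and neglected ones
    decay toward deletion or archival."""
    memory["heat"] = min(1.0, memory["heat"] + 0.2)
    memory["half_life_s"] *= 1.5
    return memory
```

Intelligent paging then amounts to feeding the LLM only memories whose current heat clears a threshold, which is where the claimed token savings would come from.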

Monday, March 16, 2026

Show HN: Hecate – Call an AI from Signal https://ift.tt/BgklAK5

Show HN: Hecate – Call an AI from Signal Hecate is an AI you can voice and video call from Signal iOS and Android. This works by installing Signal into an Android emulator and controlling the virtual camera and microphone. Tinfoil.sh is used for private inference. https://ift.tt/bnol8U6 March 16, 2026 at 06:41PM

Sunday, March 15, 2026

Saturday, March 14, 2026

Show HN: Auto-Save Claude Code Sessions to GitHub Projects https://ift.tt/GdCZyWv

Show HN: Auto-Save Claude Code Sessions to GitHub Projects I wanted a way to preserve Claude Code sessions. Once a session ends, the conversation is gone — no searchable history, no way to trace back why a decision was made in a specific PR. The idea is simple: one GitHub Issue per session, automatically linked to a GitHub Projects board. Every prompt and response gets logged as issue comments with timestamps. Since the session lives as a GitHub Issue in the same ecosystem, you can cross-reference PRs naturally — same search, same project board. npx claude-session-tracker The installer handles everything: creates a private repo, sets up a Projects board with status fields, and installs Claude Code hooks globally. It requires the gh CLI — if it's missing, the installer detects that and walks you through setup. Why GitHub, not Notion/Linear/Plane? I actually built integrations for all three first. Linking sessions back to PRs was never smooth on any of them, but the real dealbreaker was API rate limits. This fires on every single prompt and response — essentially a timeline — so rate limits meant silently dropped entries. I shipped all three, hit the same wall each time, and ended up ripping them all out. GitHub's API rate limits are generous enough that a single user's session traffic won't come close to hitting them. (GitLab would be interesting to support eventually.) *Design decisions* No MCP. I didn't want to consume context window tokens for session tracking. Everything runs through Claude Code's native hook system. Fully async. All hooks fire asynchronously — zero impact on Claude's response latency. Idempotent installer. Re-running just reuses existing config. No duplicates. 
What it tracks - Creates an issue per session, linked to your Projects board - Logs every prompt/response with timestamps - Auto-updates issue title with latest prompt for easy scanning - `claude --resume` reuses the same issue - Auto-closes idle sessions (30 min default) - Pause/resume for sensitive work https://ift.tt/SuWocmi March 14, 2026 at 10:19PM
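The one-issue-per-session, one-comment-per-turn scheme maps onto two real gh CLI subcommands, `gh issue create` and `gh issue comment`. A sketch of how a hook might assemble those invocations — the helper names and title format are illustrative, not the tracker's actual scripts:

```python
def issue_create_cmd(repo, session_id, first_prompt):
    """Build the gh invocation that opens the per-session issue
    (hypothetical title format; body carries the opening prompt)."""
    title = f"Claude session {session_id}: {first_prompt[:60]}"
    return ["gh", "issue", "create", "--repo", repo,
            "--title", title, "--body", first_prompt]

def issue_comment_cmd(repo, issue_number, role, text, timestamp):
    """Build the gh invocation that appends one prompt or response
    as a timestamped comment on the session's issue."""
    body = f"**{role}** ({timestamp})\n\n{text}"
    return ["gh", "issue", "comment", str(issue_number),
            "--repo", repo, "--body", body]
```

Because each turn is just one small API call, staying under GitHub's rate limits for a single user's traffic is plausible, which matches the author's reasoning for dropping Notion/Linear/Plane.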

Friday, March 13, 2026

Show HN: AI milestone verification for construction using AWS https://ift.tt/cFiIphn

Show HN: AI milestone verification for construction using AWS Hi HN, I built Build4Me to address a trust problem in diaspora-funded construction projects. Many families send money home to build houses but have no reliable way to verify that work is actually being done. Photos can be reused, progress exaggerated, or projects abandoned after funds are sent. Build4Me introduces milestone-based funding where each construction milestone must be verified before funds are released. The system verifies progress using: - geotagged photo capture - GPS location verification - AI image analysis - duplicate image detection It runs on serverless AWS architecture using services like Rekognition, Bedrock, Lambda, DynamoDB, and Amazon Location Service. Would love feedback on the architecture and fraud detection approach. https://builder.aws.com March 13, 2026 at 09:24PM
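The GPS-verification step amounts to checking that a geotagged photo was taken near the registered site. A sketch using the standard haversine distance — the 200 m radius is an illustrative assumption, not Build4Me's actual threshold:

```python
import math

def within_site(photo_lat, photo_lon, site_lat, site_lon, radius_m=200.0):
    """Accept a geotagged photo only if it lies within radius_m of the
    registered construction site, by great-circle (haversine) distance."""
    R = 6_371_000.0  # mean Earth radius, metres
    p1, p2 = math.radians(photo_lat), math.radians(site_lat)
    dp = math.radians(site_lat - photo_lat)
    dl = math.radians(site_lon - photo_lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a)) <= radius_m
```

Combined with duplicate-image detection, this closes the two easiest fraud paths: reusing old photos and photographing a different site.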

Thursday, March 12, 2026

Show HN: Every Developer in the World, Ranked https://ift.tt/DIYactJ

Show HN: Every Developer in the World, Ranked We've indexed 5M+ GitHub users and built a ranking system that goes beyond follower counts. The idea started from frustration: GitHub is terrible for discovery. You can't answer "who are the best Python developers in Berlin?" or "who identified transformer-based models before they blew up?" without scraping everything yourself. So we did. What we built: CodeRank score - a composite reputation signal across contributions, repository impact, and community influence Tastemaker score - did you star repos at 50 stars that now have 50,000? We track that Comparison Builder - allows users to build comparison graphics to compare devs, repos, orgs, etc. Sharable Profile Graphics - share your scores and flex on your coworkers or the community at large Some things we found interesting: Most-followed ≠ most influential. The correlation between follower count and tastemaker score is surprisingly weak. There's a whole tier of developers who consistently find projects weeks and months before they trend, with almost no public following. Location data on GitHub is a disaster. We spent an embarrassing amount of time on normalization and it's still not anywhere near perfect. Try it: https://coderank.me/ If your profile doesn't have a score, signing in will trigger scoring for your account. Curious what the HN crowd thinks about the ranking methodology, happy to get into the weeds on any of it. https://coderank.me March 13, 2026 at 12:42AM
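One plausible way to operationalize the Tastemaker idea — this is an assumption for illustration, not CodeRank's published formula — is to credit each star given while a repo was still small, weighted by how far it grew afterwards:

```python
import math

def tastemaker_score(stars_when_starred, stars_now, threshold=100):
    """Hypothetical scoring: for each (stars-at-star-time, stars-today)
    pair, award log10 growth, but only for stars placed while the repo
    was still below `threshold` stars. Starring an already-popular repo
    earns nothing."""
    score = 0.0
    for at_star, now in zip(stars_when_starred, stars_now):
        if at_star < threshold and now > at_star:
            score += math.log10(now / max(at_star, 1))
    return score
```

A log scale keeps one lucky mega-hit from dominating a profile of consistently early, moderately prescient stars.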