Saturday, February 28, 2026
Show HN: Free, open-source native macOS client for di.fm https://ift.tt/kjAPurQ
Show HN: Free, open-source native macOS client for di.fm I built a menu bar app for streaming DI.FM internet radio on macOS. Swift/SwiftUI, no Electron. The existing options for DI.FM on desktop are either the web player (yet another browser tab) or unofficial Electron wrappers that idle at 200+ MB of RAM to play an audio stream. This sits in the menu bar at ~35 MB RAM and 0% CPU. The .app is about 1 MB. What it does: browse and search stations, play/pause, volume, see what's playing (artwork, artist, track, time), pick stream quality (320k MP3, 128k AAC, 64k AAC). Media keys work. It remembers your last station. Built with AVPlayer for streaming, MenuBarExtra for the UI, MPRemoteCommandCenter for media key integration. The trickiest part was getting accurate elapsed time. DI.FM's API and the ICY stream metadata don't always agree, so there's a small state machine that reconciles the two sources. macOS 14+ required. You need a DI.FM premium account for the high-quality streams. Source and binary: https://ift.tt/Uvj2gXV https://ift.tt/Uvj2gXV March 1, 2026 at 02:21AM
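The "small state machine that reconciles the two sources" can be sketched generically. This is a hypothetical Python illustration (the app itself is Swift, and the class, field names, and tolerance here are my own assumptions, not the app's code): trust the ICY stream for track boundaries, and re-anchor to the API's elapsed time only when the two disagree beyond a tolerance.

```python
# Hypothetical sketch of reconciling two track-time sources; not the app's
# actual state machine. Times are in seconds.
class ElapsedTimeReconciler:
    def __init__(self, tolerance_s=3.0):
        self.tolerance_s = tolerance_s
        self.track_id = None
        self.offset_s = 0.0  # stream position at which the current track started

    def update(self, api_track_id, api_elapsed_s, icy_track_id, stream_pos_s):
        # ICY metadata reflects what is actually playing; the API may lag.
        # On an ICY track change, reset the baseline to the current position.
        if icy_track_id != self.track_id:
            self.track_id = icy_track_id
            self.offset_s = stream_pos_s
        elapsed = stream_pos_s - self.offset_s
        # If the API agrees on the track but disagrees on elapsed time by more
        # than the tolerance, re-anchor to the API's value.
        if api_track_id == self.track_id and abs(api_elapsed_s - elapsed) > self.tolerance_s:
            self.offset_s = stream_pos_s - api_elapsed_s
            elapsed = api_elapsed_s
        return elapsed
```

The design choice is that small drift is ignored (the displayed time stays smooth) while large disagreement snaps to the authoritative source.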
Show HN: Monohub – a new GitHub alternative / code hosting service https://ift.tt/C9fFxMw
Show HN: Monohub – a new GitHub alternative / code hosting service Hello everyone, My name is Teymur Bayramov, and I am developing a forge/code hosting service called Monohub. It is at a fairly early stage of development, so it's quite rough around the edges. It is developed and hosted in the EU. I started developing it as a slim wrapper around Git to serve my own code, but it grew to such an extent that I decided to offer it as a service. It doesn't have much at the moment, but it already has basic pull requests. Accessibility is a high priority. It will be a paid service, but since it's an early start, an "early adopter discount" applies – 6 months for free. No card details required. I would be happy if you gave it a try and let me know what you think, and perhaps share what you lack in existing solutions that you would like to see implemented here. Warmest wishes, Teymur. https://monohub.dev/ February 28, 2026 at 11:13PM
Friday, February 27, 2026
Show HN: Claude-File-Recovery, recover files from your ~/.claude sessions https://ift.tt/E92tTcp
Show HN: Claude-File-Recovery, recover files from your ~/.claude sessions Claude Code rm -rf'd real directories in my Obsidian vault through a symlink it didn't realize was there, deleting my research and plan markdown files, and then informed me: "I made a mistake." Unfortunately, the backup of my documentation hadn't run for a month. So I built claude-file-recovery, a CLI tool and TUI that extracts your files from your ~/.claude session history; thankfully, I was able to recover my files. It can extract any file that Claude Code ever read, edited, or wrote. I hope you will never need it, but you can find it on my GitHub and pip. Note: it can recover an earlier version of a file at a certain point in time. pip install claude-file-recovery https://ift.tt/DFQ1rhm February 27, 2026 at 08:26PM
Show HN: Interactive Resume/CV Game https://ift.tt/06J9LXG
Show HN: Interactive Resume/CV Game https://breezko.dev February 27, 2026 at 11:21PM
Thursday, February 26, 2026
Show HN: Safari-CLI – Control Safari without an MCP https://ift.tt/EWZTlSa
Show HN: Safari-CLI – Control Safari without an MCP Hello HN! I built this tool to help my agentic software development (vibe coding) workflow. I wanted to debug Safari-specific frontend bugs using Copilot CLI; however, MCP servers are disabled in my organisation. So I built this CLI tool to give the LLM agent control over the browser. Hope you'll find it useful! https://ift.tt/iA1d5by February 27, 2026 at 01:18AM
Show HN: I stopped building apps for people. Now I make CLI tools for agents https://ift.tt/agQ5tTv
Show HN: I stopped building apps for people. Now I make CLI tools for agents https://ift.tt/aWxu9U1 February 26, 2026 at 11:14PM
Wednesday, February 25, 2026
Show HN: Linex – A daily challenge: placing pieces on a board that fights back https://ift.tt/5jQhTcz
Show HN: Linex – A daily challenge: placing pieces on a board that fights back Hi HN, I wanted to share a web game I’ve been building in HTML, JavaScript, MySQL, and PHP called LINEX. It is primarily designed and optimized to be played in the mobile browser. The idea is simple: you have an 8x8 board where you must place pieces (Tetris-style and some custom shapes) to clear horizontal and vertical lines. Yes, someone might think this has already been done, but let me explain. You choose where to place the piece and how to rotate it. The core interaction consists of "drawing" the piece tap-by-tap on the grid, which provides a very satisfying tactile sense of control and requires a much more thoughtful strategy. To avoid the flat difficulty curve typical of games in this genre, I’ve implemented a couple of twists: 1. Progressive difficulty (The board fights back): As you progress and clear lines, permanently blocked cells randomly appear on the board. This forces you to constantly adapt your spatial vision. 2. Tools to defend yourself: To counter frustration, you have a very limited number of aids (skip the piece, choose another one, or use a special 1x1 piece). These resources increase slightly as the board fills up with blocked cells, forcing you to decide the exact right moment to use them. The game features a daily challenge driven by a date-based random seed (PRNG). Everyone gets exactly the same sequence of pieces and blockers. Furthermore, the base difficulty scales throughout the week: on Mondays you start with a clean board (0 initial blocked cells, although several will appear as the game progresses), and the difficulty ramps up until Sunday, where you start the game with 3 obstacles already in place. In addition to the global medal leaderboard, you can add other users to your profile to create a private leaderboard and compete head-to-head just with your friends. 
Time is also an important factor, as in the event of a tie in cleared lines, the player who completed them faster will rank higher on the leaderboard. I would love for you to check it out. I'm especially looking for honest feedback on the difficulty curve, the piece-placement interaction (UI/UX), or the balancing of obstacles/tools, although any other ideas, critiques, or suggestions are welcome. https://ift.tt/LPkDXhv Thanks! https://ift.tt/LPkDXhv February 25, 2026 at 03:33AM
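The date-seeded daily challenge Linex describes (same piece sequence for everyone, difficulty ramping from Monday to Sunday) can be sketched in a few lines. This is an illustrative Python sketch under my own assumptions (the game itself is JavaScript/PHP, and the shape list and seed derivation here are hypothetical):

```python
import hashlib
import random
from datetime import date

def daily_rng(day: date) -> random.Random:
    # Hash the ISO date so every player derives the same 64-bit seed.
    seed = int.from_bytes(
        hashlib.sha256(day.isoformat().encode()).digest()[:8], "big")
    return random.Random(seed)

def daily_pieces(day: date, n: int = 5) -> list:
    # Deterministic piece sequence: same date -> same pieces for everyone.
    shapes = ["I", "L", "T", "S", "Z", "O", "1x1"]
    rng = daily_rng(day)
    return [rng.choice(shapes) for _ in range(n)]

def initial_blockers(day: date) -> int:
    # Monday (weekday 0) starts clean; Sunday (6) starts with 3 obstacles.
    return round(day.weekday() * 3 / 6)
```

Because the seed is derived from the date alone, the server never has to ship the full sequence: clients regenerate it locally and leaderboards stay comparable.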
Show HN: I ported Manim to TypeScript (run 3b1B math animations in the browser) https://ift.tt/yua30AI
Show HN: I ported Manim to TypeScript (run 3b1B math animations in the browser) Hi HN, I'm Narek. I built Manim-Web, a TypeScript/JavaScript port of 3Blue1Brown’s popular Manim math animation engine. The Problem: Like many here, I love Manim's visual style. But setting it up locally is notoriously painful - it requires Python, FFmpeg, Cairo, and a full LaTeX distribution. It creates a massive barrier to entry, especially for students or people who just want to quickly visualize a concept. The Solution: I wanted to make it zero-setup, so I ported the engine to TypeScript. Manim-Web runs entirely client-side in the browser. No Python, no servers, no install. It runs animations in real-time at 60fps. How it works underneath: - Rendering: Uses Canvas API / WebGL (via Three.js for 3D scenes). - LaTeX: Rendered and animated via MathJax/KaTeX (no LaTeX install needed!). - API: I kept the API almost identical to the Python version (e.g., scene.play(new Transform(square, circle))), meaning existing Manim knowledge transfers over directly. - Reactivity: Updaters and ValueTrackers follow the exact same reactive pattern as the Python original. Because it's web-native, the animations are now inherently interactive (objects can be draggable/clickable) and can be embedded directly into React/Vue apps, interactive textbooks, or blogs. I also included a py2ts converter to help migrate existing scripts. Live Demo: https://maloyan.github.io/manim-web/examples GitHub: https://ift.tt/TQPNv1t It's open-source (MIT). I'm still actively building out feature parity with the Python version, but core animations, geometry, plotting, and 3D orbiting are working great. I would love to hear your feedback, and I'll be hanging around to answer any technical questions about rendering math in the browser! https://ift.tt/TQPNv1t February 25, 2026 at 10:15PM
Tuesday, February 24, 2026
Show HN: Open-Weight Image-Video VAE (Better Reconstruction ≠ Better Generation) https://ift.tt/bFNOw8B
Show HN: Open-Weight Image-Video VAE (Better Reconstruction ≠ Better Generation) https://ift.tt/vJnzrSH February 24, 2026 at 10:59PM
Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP) https://ift.tt/1xiGIsj
Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP) It takes an input video and converts it into H.264/Opus RTP streams that you can blast at your video call systems (WebRTC, SFUs, etc.). It also injects network chaos like packet loss, jitter, and bitrate throttling to see how things break. It scales from 1 to n participants, depending on the compute and memory of the host system. Best part? It's packaged with Nix, so it builds the same everywhere (Linux, macOS, ARM, x86). No dependency hell. It supports both UDP (with a relay chain for Kubernetes) and WebRTC (with containerized TURN servers). Chaos spikes can be distributed evenly, randomly, or front/back-loaded for different test scenarios. To change this, just edit the values in a single config file. https://ift.tt/MNteC4q February 23, 2026 at 12:53PM
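The packet-loss and jitter injection described above amounts to a per-packet coin flip plus a random delivery-time perturbation. A minimal Python sketch of the idea (illustrative only; not this tool's code, and real network chaos is applied to live sockets rather than a list):

```python
import random

def apply_chaos(packets, loss_rate=0.05, jitter_ms=30.0, seed=42):
    """Simulate per-packet loss and jitter over (timestamp_ms, payload) tuples.

    Returns the surviving packets with jittered delivery times, in the
    (possibly reordered) order a receiver would observe them.
    """
    rng = random.Random(seed)  # seeded for reproducible test runs
    out = []
    for ts, payload in packets:
        if rng.random() < loss_rate:
            continue  # packet dropped
        # Uniform jitter around the original send time.
        out.append((ts + rng.uniform(-jitter_ms, jitter_ms), payload))
    out.sort(key=lambda p: p[0])
    return out
```

Jitter larger than the packet spacing naturally produces reordering, which is exactly the failure mode jitter buffers in WebRTC stacks exist to absorb.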
Monday, February 23, 2026
Show HN: I vibe-coded a custom WebGPU engine for my MMO https://ift.tt/y48hEzM
Show HN: I vibe-coded a custom WebGPU engine for my MMO It took me about a week to vibe code this 3D game engine with Opus 4.6 that I intend to use as a replacement for Three.js and React Three Fiber in my browser MMORPG, Mana Blade. I was not expecting to be able to reach that point so easily, but pretty much every feature took somewhere between 30 minutes and 1 hour - 1 to 3 prompts on average. It is vibe-coded in the sense that I haven't looked at the code, but I am very careful with my prompts and constantly have Claude reviewing the codebase, looking for performance and code quality improvements. It can reach 2000 draw calls on recent integrated GPUs, such as modern phones or MacBooks, where Three.js usually starts dropping frames at 300-600 draw calls. I love Three.js, but I wanted to build something more minimal that does exactly what I need with better performance. I started with a C/WASM core but ended up sticking with JS because the performance difference wasn't significant enough for the number of entities in my game (never more than 500 entities). All in all, it was a fascinating experience, and I learned a lot about engines, even without typing a single line of code. It's pretty wild that we can now quite easily build in-house engines alongside our games as solo developers. https://ift.tt/ziIckJH February 23, 2026 at 10:30PM
Show HN: Unlock the best engineering knowledge in papers for your coding agent https://ift.tt/OgZXV5c
Show HN: Unlock the best engineering knowledge in papers for your coding agent https://ift.tt/gPlTa2c February 23, 2026 at 09:33PM
Sunday, February 22, 2026
Show HN: Mujoco React https://ift.tt/j3yftRb
Show HN: Mujoco React MuJoCo physics simulation in the browser using React. This is made possible by DeepMind's mujoco-wasm (mujoco-js), which compiles MuJoCo to WebAssembly. We wrap it with React Three Fiber so you can load any MuJoCo model, step physics, and write controllers as React components, all running client-side in the browser https://ift.tt/JOoVQl6 February 22, 2026 at 10:29PM
Saturday, February 21, 2026
Show HN: DevBind – I made a Rust tool for zero-config local HTTPS and DNS https://ift.tt/HYfqwP5
Show HN: DevBind – I made a Rust tool for zero-config local HTTPS and DNS Hey HN, I got tired of messing with /etc/hosts and browser SSL warnings every time I started a new project. So I wrote DevBind. It's a small reverse proxy in Rust. It basically does two things: 1. Runs a tiny DNS server so anything.test just works instantly (no more manual hosts file edits). 2. Sits on port 443 and auto-signs SSL certs on the fly so you get the nice green lock in Chrome/Firefox. It's been built mostly for Linux (it hooks into systemd-resolved), but I've added some experimental bits for Mac/Win too. Still a work in progress, but I've been using it for my own dev work and it's saved me a ton of time. Would love to know if it breaks for you or if there's a better way to handle the networking bits! https://ift.tt/ni7cqL9 February 22, 2026 at 12:19AM
Show HN: Winslop – De-Slop Windows https://ift.tt/XWoYZED
Show HN: Winslop – De-Slop Windows https://ift.tt/E84fUxd February 21, 2026 at 11:56PM
Friday, February 20, 2026
Show HN: Manifestinx-verify – offline verifier for evidence bundles (drift) https://ift.tt/8nMyfpX
Show HN: Manifestinx-verify – offline verifier for evidence bundles (drift) Manifest-InX EBS is a spec + offline verifier + proof kit for tamper-evident evidence bundles. Non-negotiable alignment: - Live provider calls are nondeterministic. - Determinism begins at CAPTURE (pinned artifacts). - Replay is deterministic offline. - Drift/tamper is deterministically rejected. Try it in typically ~10 minutes (no signup): 1) Run the verifier against the included golden bundle → PASS 2) Tamper an artifact without updating hashes → deterministic drift/tamper rejection Repo: https://ift.tt/Oqy43nN Skeptic check: docs/ebs/PROOF_KIT/10_MINUTE_SKEPTIC_CHECK.md Exit codes: 0=OK, 2=DRIFT/TAMPER, 1=INVALID/ERROR Boundaries: - This repo ships verifier/spec/proof kit only. The Evidence Gateway (capture/emission runtime) is intentionally not included. - This is not a “model correctness / no hallucinations” claim—this is evidence integrity + deterministic replay/verification from pinned artifacts. Looking for feedback: - Does the exit-code model map cleanly to CI gate usage? - Any spec/report format rough edges that block adoption? https://ift.tt/Oqy43nN February 20, 2026 at 10:27PM
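The core check (recompute pinned artifact hashes offline, compare to the manifest, and map the outcome onto the stated exit codes) can be sketched as follows. This is a hedged Python illustration with a hypothetical manifest.json layout, not the EBS spec's actual format:

```python
import hashlib
import json
from pathlib import Path

# Exit-code model from the post: 0=OK, 2=DRIFT/TAMPER, 1=INVALID/ERROR.
OK, INVALID, DRIFT = 0, 1, 2

def verify_bundle(bundle_dir: str) -> int:
    """Recompute artifact hashes and compare against the pinned manifest.

    The {"artifacts": {relative_path: sha256_hex}} layout is illustrative.
    """
    try:
        manifest = json.loads(Path(bundle_dir, "manifest.json").read_text())
    except (OSError, json.JSONDecodeError):
        return INVALID
    for rel_path, pinned in manifest.get("artifacts", {}).items():
        p = Path(bundle_dir, rel_path)
        if not p.is_file():
            return INVALID
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        if digest != pinned:
            return DRIFT  # deterministic rejection of drift/tamper
    return OK
```

Distinct codes for "content changed" versus "bundle malformed" are what make a CI gate expressive: a pipeline can fail hard on DRIFT while treating INVALID as a configuration error.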
Thursday, February 19, 2026
Show HN: A small, simple music theory library in C99 https://ift.tt/PT6Il0x
Show HN: A small, simple music theory library in C99 https://ift.tt/5Q7IZUd February 20, 2026 at 02:54AM
Show HN: Hi.new – DMs for agents (open-source) https://ift.tt/DeSzuka
Show HN: Hi.new – DMs for agents (open-source) https://www.hi.new/ February 20, 2026 at 01:20AM
Show HN: Astroworld – A universal N-body gravity engine in Python https://ift.tt/VBbLmlK
Show HN: Astroworld – A universal N-body gravity engine in Python I've been working on a modular N-body simulator in Python called Astroworld. It started as a Solar System visualizer, but I recently refactored it into a general-purpose engine that decouples physical laws from planetary data. Technical Highlights: Symplectic Integration: uses a Velocity Verlet integrator to maintain long-term energy conservation ($\Delta E/E \approx 10^{-8}$ in stable systems). Agnostic Architecture: it can ingest any system via orbital elements (Keplerian) or state vectors. I've used it to validate the stability of ultra-compact systems like TRAPPIST-1 and long-period perturbations like the Planet 9 hypothesis. Validation: includes 90+ physical tests, including Mercury's relativistic precession using Schwarzschild metric corrections. The Planet 9 Experiment: I ran a 10k-year simulation to track the differential signal in the argument of perihelion ($\omega$) for TNOs like Sedna. The result ($\approx 0.002^{\circ}$) was a great sanity check for the engine's precision, as this effect is secular and requires millions of years to fully manifest. The Stack: NumPy for vectorization, Matplotlib for 2D analysis, and Plotly for interactive 3D trajectories. I'm currently working on a real-time 3D rendering layer. I'd love to get feedback on the integrator's stability for high-eccentricity orbits or suggestions on implementing more complex gravitational potentials. https://ift.tt/rTHXu2h February 19, 2026 at 11:57PM
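For readers unfamiliar with the integrator named above: velocity Verlet updates positions with the old acceleration, recomputes forces, then updates velocities with the average of old and new accelerations. A generic NumPy sketch (my own minimal version, not Astroworld's code):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def accelerations(pos, masses):
    """Pairwise Newtonian gravity, vectorized via NumPy broadcasting."""
    # r[i, j] = vector from body i to body j; shapes: pos (N, 3), masses (N,)
    r = pos[None, :, :] - pos[:, None, :]
    d = np.linalg.norm(r, axis=-1)
    np.fill_diagonal(d, np.inf)  # suppress self-interaction
    return (G * masses[None, :, None] * r / d[:, :, None] ** 3).sum(axis=1)

def verlet_step(pos, vel, masses, dt):
    """One velocity Verlet step; symplectic, hence good long-term energy behavior."""
    a0 = accelerations(pos, masses)
    pos = pos + vel * dt + 0.5 * a0 * dt ** 2
    a1 = accelerations(pos, masses)
    vel = vel + 0.5 * (a0 + a1) * dt
    return pos, vel
```

Because forces come in equal-and-opposite pairs, total linear momentum is conserved to floating-point precision regardless of step size, which makes it a cheap sanity check on any N-body implementation.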
Wednesday, February 18, 2026
Show HN: Sports-skills.sh – sports data connectors for AI agents https://ift.tt/9BFXrzb
Show HN: Sports-skills.sh – sports data connectors for AI agents We built this because every sports AI demo uses fake data or locks you behind an enterprise API contract. sports-skills gives your agent real sports data with one install command. No API keys. No accounts. For personal use. Eight connectors out of the box: NFL, soccer across 13 leagues with xG, Formula 1 lap and pit data, NBA, WNBA, Polymarket, Kalshi, and a sports news aggregator pulling from BBC/ESPN/The Athletic. npx skills add machina-sports/sports-skills Open for contributions. https://ift.tt/19rtecq February 19, 2026 at 12:40AM
Show HN: Keystone – configure Dockerfiles and dev containers for any repo https://ift.tt/cCgmxBo
Show HN: Keystone – configure Dockerfiles and dev containers for any repo We kept hitting the same wall: you clone some arbitrary repo and just want it to run without any configuration work. So we built Keystone, an open source tool that spins up a Modal sandbox, runs Claude Code inside it, and produces a working .devcontainer/ config (Dockerfile, devcontainer.json, test runner) for any git repo. We build on the dev container standard, so the output works with VS Code and GitHub Codespaces out of the box. Main use cases: reproducible dev/CI environments, self-describing repos, and safely sandboxed coding agents. Our goal is to encourage all repos to self-describe their runtime environment. Why the sandbox? Running Claude directly against your Docker daemon is risky. We've watched it clear Docker config and tweak kernel settings when iterating on containers. Containerization matters most when your agent is acting like a sysadmin. To use it: get a Modal account and an Anthropic API key, run Keystone on your repo, check in the .devcontainer/ directory. See the project README for more details. https://ift.tt/xS0qnXc February 18, 2026 at 10:40PM
Tuesday, February 17, 2026
Show HN: Pg-typesafe – Strongly typed queries for PostgreSQL and TypeScript https://ift.tt/CbflIXy
Show HN: Pg-typesafe – Strongly typed queries for PostgreSQL and TypeScript Throughout my career, I tried many tools to query PostgreSQL, and in the end concluded that for what I do, the simplest is almost always the best: raw SQL queries. Until now, I typed the results manually and relied on tests to catch problems. While this is OK in, e.g., Go, it is quite annoying in TypeScript. First, because of the more powerful type system (it's easier to guess that updated_at is a date than to guess whether it's nullable or not); second, because of idiosyncrasies (INT4s are deserialised as JS numbers, but INT8s are deserialised as strings). So I wrote pg-typesafe, with the goal of making it the least burdensome option: you call queries exactly the same way you would with node-pg, and they are fully typed. It's very new, but I'm already using it in a large-ish project, where it found several bugs and footguns and also allowed me to remove many manual type definitions. https://ift.tt/CVeQfyB February 17, 2026 at 10:15PM
Show HN: I'm launching a LPFM radio station https://ift.tt/R5OpPL9
Show HN: I'm launching a LPFM radio station I've been working on creating a Low Power FM radio station for the east San Fernando Valley of Los Angeles. We are not yet on the broadcast band, but our channel will be 95.9FM and our range can be seen on the homepage of our site. KPBJ is a freeform community radio station. Anyone in the area is encouraged to get a timeslot and become a host. We make no curatorial decisions. It's sort of like public access or a college station in that way. This month we launched our internet stream and on-boarded about 60 shows. They are mostly music, but there are a few talk shows. We are restricting all shows to monthly time slots for now, but this will change in the near future as everyone gets more familiar with the systems involved. All shows are pre-recorded until we can raise the money to get a studio. We have a site secured for our transmitter, but we need to fundraise to cover the equipment and build-out costs. We will be broadcasting with 100W ERP from a ridgeline in the Verdugos at about 1500ft elevation. The site will need to be off grid, so we will need to install a solar system with battery backup. We are planning to sync the station to the transmit site with 802.11ah. I've built all of our web infrastructure using Haskell, NixOS, Terraform, and HTMX: https://ift.tt/GJEcQA8 This is a pretty substantial project involving a bunch of social and technical challenges on a shoestring budget. I feel pretty confident we will pull it off and make it a high-impact local radio station. The station is managed by a 501c3 non-profit we created. We are actively seeking fundraising, especially to get our transmit site up and running. If you live in the area or want to contribute in any way, then please reach out! https://www.kpbj.fm/ February 18, 2026 at 12:15AM
Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway https://ift.tt/1QKODoN
Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway https://ift.tt/urXiVea February 17, 2026 at 11:24PM
Monday, February 16, 2026
Show HN: Nothing as a Service – Premium nothingness for minimalists https://ift.tt/eSpat4i
Show HN: Nothing as a Service – Premium nothingness for minimalists https://euphonious-blancmange-24c5b0.netlify.app/ February 16, 2026 at 10:01PM
Show HN: Nerve: Stitches all your data sources into one mega-API https://ift.tt/pXhFsIr
Show HN: Nerve: Stitches all your data sources into one mega-API Hi HN! Nerve is a solo project I've been working on for the last few years. It's a developer tool that stitches together data from multiple sources in real-time. A lot of high-leverage projects (AI or otherwise) involve tying data together from multiple systems of record. This is easy enough when the data is simple and the sources are few, but if you have highly nested data and lots of sources (or you need things like federated pagination and filtering), you have to write a lot of gnarly boilerplate that's brittle and easy to get wrong. One solution is to import all your data into a central warehouse and just pull it from there. This works, but 1) you need a warehouse, 2) you have an extra copy of the data that can get stale or inconsistent, 3) you need to write and manage pipelines/connectors (or outsource them to a vendor), and 4) you're adding an extra point of failure. Nerve lets you write GraphQL-style queries that span multiple sources; then it goes out and pulls from whatever source APIs it needs to at query-time - all your source data stays where it is. Nerve has pre-built bindings to external SAAS services, and it's straightforward to hook it into your internal sources as well. Nerve is made for individual developers or two-pizza teams who: -Are building agents/internal tools -Need to deal with messy data strewn across different systems -Don't have a data team/warehouse at their disposal, (or do, but can't get a slice of their bandwidth) -Want to get to production as quickly as possible Everything you see in the demo is shipped and usable, but I'm adding a little polish before I officially launch. In the meantime, if you have a project you'd like to use Nerve on and you want to be a beta user, just drop me a line at mprast@get-nerve.com (it's free! 
I'll just pop in from time to time to ask you how it's going and what I can improve :) ) If you want to get an email when Nerve is ready for prime time, you can sign up for the waitlist at get-nerve.com. Thanks for reading! (EDIT: Nerve is desktop only! I'll put up a gate on the site saying as much.) https://ift.tt/Z2NTdv1 February 15, 2026 at 03:07AM
Sunday, February 15, 2026
Show HN: Please hack my C webserver (it's a collaborative whiteboard) https://ift.tt/30Hf1QR
Show HN: Please hack my C webserver (it's a collaborative whiteboard) Source code: https://ift.tt/OFVtG4D https://ced.quest/draw/ February 15, 2026 at 10:57PM
Show HN: VOOG – Moog-style polyphonic synthesizer in Python with tkinter GUI https://ift.tt/b7wpHfy
Show HN: VOOG – Moog-style polyphonic synthesizer in Python with tkinter GUI I built a polyphonic synthesizer in Python with a tkinter GUI styled after the Moog Subsequent 37. Features: 3 oscillators, Moog ladder filter (24dB/oct), dual ADSR envelopes, LFO, glide, noise generator, 4 multitimbral channels, 19 presets, rotary knob GUI, virtual keyboard with mouse + QWERTY input, and MIDI support. No external GUI frameworks — just tkinter, numpy, and sounddevice. https://ift.tt/xJIo60M February 15, 2026 at 11:40PM
Show HN: Microgpt is a GPT you can visualize in the browser https://ift.tt/WZdS6Gr
Show HN: Microgpt is a GPT you can visualize in the browser very much inspired by karpathy's microgpt of the same name. it's (by default) a 4000 param GPT/LLM/NN that learns to generate names. this is sorta an educational tool in that you can visualize the activations as they pass through the network, and click on things to get an explanation of them. https://ift.tt/M4RX6xY February 15, 2026 at 10:40PM
Show HN: An open-source extension to chat with your bookmarks using local LLMs https://ift.tt/bGTrQNJ
Show HN: An open-source extension to chat with your bookmarks using local LLMs I read a lot online and constantly bookmark articles, docs, and resources… then forget why I saved them. I was also very bored on Valentine's, so I built a browser extension that lets you chat with your bookmarks directly, using local-first AI (WebLLM running entirely in the browser). The extension downloads and indexes your bookmarked pages, stores them locally, and lets you ask questions. No server, no cloud processing, everything stays on your machine. Very early, but it works, and I'm planning to add a bunch of stuff. Did I mention it's open-source, MIT licensed? https://ift.tt/HTD4teZ February 15, 2026 at 09:01PM
Saturday, February 14, 2026
Show HN: Rover – Embeddable web agent https://ift.tt/dBtfLa6
Show HN: Rover – Embeddable web agent Rover is the world's first Embeddable Web Agent, a chat widget that lives on your website and takes real actions for your users. Clicks buttons. Fills forms. Runs checkout. Guides onboarding. All inside your UI. One script tag. No APIs to expose. No code to maintain. We built Rover because we think websites need their own conversational agentic interfaces as users don't want to figure out how your site works. If they don't have one then they are going to be disintermediated by Chrome's or Comet's agent. We are the only Web Agent with a DOM-only architecture, thus we can setup an embeddable script as a harness to take actions on your site. Our DOM-native approach hits 81.39% on WebBench. Beta with embed script is live at rtrvr.ai/rover. Built by two ex-Google engineers. Happy to answer architecture questions. https://ift.tt/TsaCj31 February 14, 2026 at 02:26AM
Show HN: Azazel – Lightweight eBPF-based malware analysis sandbox using Docker https://ift.tt/ph4fdxQ
Show HN: Azazel – Lightweight eBPF-based malware analysis sandbox using Docker Hey HN, I got frustrated with heavy proprietary sandboxes for malware analysis, so I built my own. Azazel is a single static Go binary that attaches 19 eBPF hook points to an isolated Docker container and captures everything a sample does — syscalls, file I/O, network connections, DNS, process trees — as NDJSON. It uses cgroup-based filtering so it only traces the target container, and CO-RE (BTF) so it works across kernel versions without recompilation. It also has built-in heuristics that flag common malware behaviors: exec from /tmp, sensitive file access, ptrace, W+X mmap, kernel module loading, etc. Stack: Go + cilium/ebpf + Docker Compose. Requires Linux 5.8+ with BTF. This is the first release — it's CLI-only for now. A proper dashboard is planned. Contributions welcome, especially around new detection heuristics and additional syscall hooks. https://ift.tt/Yr56OCz February 14, 2026 at 11:07PM
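Azazel's heuristics layer is essentially a set of predicates run over the captured NDJSON event stream. A hedged Python sketch of that idea (the field names and rule set here are hypothetical, not Azazel's actual schema; the real tool applies these in Go over eBPF events):

```python
import json

# Illustrative behavioral rules in the spirit of the flags listed above.
RULES = [
    ("exec_from_tmp", lambda e: e.get("event") == "exec"
        and e.get("path", "").startswith(("/tmp/", "/dev/shm/"))),
    ("sensitive_file_access", lambda e: e.get("event") == "open"
        and e.get("path") in ("/etc/shadow", "/etc/passwd")),
    ("ptrace", lambda e: e.get("event") == "syscall"
        and e.get("name") == "ptrace"),
]

def flag_events(ndjson_lines):
    """Return (rule_name, event) pairs for every heuristic an event trips."""
    hits = []
    for line in ndjson_lines:
        event = json.loads(line)
        for rule_name, predicate in RULES:
            if predicate(event):
                hits.append((rule_name, event))
    return hits
```

Keeping detection as plain predicates over a line-oriented log is what makes "contributions welcome, especially around new detection heuristics" cheap: a new rule is one entry in a table, testable against recorded captures.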
Friday, February 13, 2026
Show HN: I speak 5 languages. Common apps taught me none. So I built lairner https://ift.tt/xOdbQRq
Show HN: I speak 5 languages. Common apps taught me none. So I built lairner I'm Tim. I speak German, English, French, Turkish, and Chinese. I learned Turkish with lairner itself -- after I built it. That's the best proof I can give you that this thing actually works. The other four I learned the hard way: talking to people, making mistakes, reading things I actually cared about, and being surrounded by the language until my brain gave in. Every language app I tried got the same thing wrong: they teach you to pass exercises, not to speak. You finish a lesson, you get your dopamine hit, you maintain your streak, and six months later you still can't order food in the language you've been "learning." So I built something different. lairner has 700+ courses across 70+ languages, including ones that Duolingo will never touch because there's no profit in it. Endangered languages. Minority languages. A Turkish speaker can learn Basque. A Chinese speaker can learn Welsh. Most platforms only let you learn from English. lairner lets you learn from whatever you already speak. We work together with some institutes for endangered languages to be able to teach them on our platform. It's a side project. I work a full-time dev job and build this in evenings and weekends. Tens of thousands of users so far, no ad spend, no funding. I'm not going to pretend this replaces living in a country or having a conversation partner. But I wanted something that at least tries to teach you the language instead of teaching you to play a language-themed game. Happy to answer anything. https://lairner.com February 13, 2026 at 07:11PM
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills https://ift.tt/v8Ukay3
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills Hey HN. I'm Fabien, principal engineer, 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime. Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus). I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well. I've written before about owning your content ( https://ift.tt/U1O8uR2 ) and owning your email ( https://ift.tt/k6zLqY0 ). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction. It's alpha. I use it daily and I'm shipping because it's useful, not because it's done. Longer architecture deep-dive: https://ift.tt/WbAof4D... Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback. https://www.moltis.org February 12, 2026 at 11:15PM
Thursday, February 12, 2026
Show HN: What is HN thinking? Real-time sentiment and concept analysis https://ift.tt/pAJ5er1
Show HN: What is HN thinking? Real-time sentiment and concept analysis Hi HN, I made Ethos, an open-source tool to visualize the discourse on Hacker News. It extracts entities, tracks sentiment, and groups discussions by concept. Check it out: https://ift.tt/hDIFTgy This was a "budget build" experiment: I managed to ship it for under $1 in infra costs. Originally I was using `qwen3-8b` for the LLM and `qwen3-embedding-8b` for the embedding, but I ran into some capacity issues with that model and switched to `llama-3.1-8b-instruct` to stay within a similar budget while getting higher throughput. What LLM or embedding would you have used within the same price range? It would need to be a model that supports structured output. How much of a problem is it that `llama-3.1` is paired with a higher-dimension embedding from a different family? I originally wanted to keep the LLM and embedding within the same family, but I'm not sure if there is much point in that. Repo: https://ift.tt/D7b0N6O I'm looking for feedback on which metrics (sentiment vs. concepts) you find most interesting! PRs welcome! https://ift.tt/EbVzdAC February 12, 2026 at 11:27PM
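The concept-grouping step the post mentions can be sketched without any model in the loop: given embeddings (from whichever model you pick), grouping is just similarity clustering. A minimal illustrative sketch, assuming a greedy single-pass scheme and made-up 2-d vectors - Ethos's actual pipeline and thresholds may differ:

```python
# Illustrative sketch (not Ethos's actual code): group items into "concepts"
# by cosine similarity of their embeddings. Vectors here are stand-ins; in a
# real pipeline they would come from an embedding model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_by_concept(items, threshold=0.8):
    """Greedy single-pass clustering: each item joins the first group whose
    seed vector it is similar enough to, else it starts a new group."""
    groups = []  # list of (seed_vector, [member names])
    for name, vec in items:
        for group in groups:
            if cosine(group[0], vec) >= threshold:
                group[1].append(name)
                break
        else:
            groups.append((vec, [name]))
    return [members for _, members in groups]

items = [
    ("rust memory safety", [0.9, 0.1]),
    ("borrow checker woes", [0.85, 0.15]),
    ("gpu inference cost", [0.1, 0.95]),
]
print(group_by_concept(items))  # two concept groups
```

A real deployment would update centroids and use a proper vector index, but the shape of the problem is the same.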
Show HN: rari, the rust-powered react framework https://ift.tt/txjAUQN
Show HN: rari, the rust-powered react framework https://rari.build/ February 12, 2026 at 11:15PM
Wednesday, February 11, 2026
Show HN: Agent framework that generates its own topology and evolves at runtime https://ift.tt/31itnaG
Show HN: Agent framework that generates its own topology and evolves at runtime

Hi HN, I'm Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools. Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections:

1. The "Toy App" Ceiling & GCU Trap. Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session. The GCU hype (agents "looking" at screens) is skeuomorphic. It's slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless.

2. Inversion of Control: OODA > DAGs. Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior:
- Observe: Exceptions are observations (FileNotFound = new state), not crashes.
- Orient: Adjust strategy based on Memory and Traits.
- Decide: Generate new code at runtime.
- Act: Execute.
The topology shouldn't be hardcoded; it should emerge from the task's entropy.

3. Reliability: The "Synthetic" SLA. You can't guarantee one inference (k=1) is correct, but you can guarantee a system of inference (k=n) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80% accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down - trading latency/tokens for certainty.

4. Biology & Psychology in Code. "Hard logic" can't solve "soft problems." We map cognition to architectural primitives:
- Homeostasis: Solving "perseveration" (infinite loops) via a "stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift.
- Traits: Personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking.

For the industry, we need engineers interested in the intersection of biology, psychology, and distributed systems to help us move beyond brittle scripts. It'd be great to have you roasting my code and sharing feedback. Repo: https://ift.tt/u3UEjTJ https://ift.tt/GoPg0EV February 11, 2026 at 11:39PM
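The "Best-of-3" arithmetic above can be made concrete. This is my reading of the idea, not Aden's code: wrap an unreliable step in a verify-and-retry loop, so reliability becomes a function of the retry budget. A step that fails verification 20% of the time fails all three attempts only 0.2^3 = 0.8% of the time, assuming independent attempts and a trustworthy verifier:

```python
# Sketch of a "synthetic SLA": retry an unreliable step until a verifier
# accepts its output, trading compute/latency for certainty.
import random

def with_retries(step, verify, k=3):
    """Run `step` up to k times, returning the first output `verify` accepts."""
    last = None
    for _ in range(k):
        last = step()
        if verify(last):
            return last
    return last  # all k attempts failed verification

# With an 80%-accurate step and an oracle verifier, the chance that all
# three attempts fail is 0.2 ** 3 = 0.008, i.e. ~99.2% synthetic accuracy.
random.seed(0)
step = lambda: "ok" if random.random() < 0.8 else "bad"
result = with_retries(step, verify=lambda out: out == "ok", k=3)
print(result)
```

The hard part in practice is the verifier: the math only holds to the extent that verification is cheaper and more reliable than generation.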
Show HN: Unpack – a lightweight way to steer Codex/Claude with phased docs https://ift.tt/5iyBeIG
Show HN: Unpack – a lightweight way to steer Codex/Claude with phased docs I've been using LLMs for long discovery and research chats (papers, repos, best practices), then distilling that into phased markdown (build plan + tests), then handing those phases to Codex/Claude to implement and test phase by phase. The annoying part was always the distillation and keeping docs and architecture current, so I built Unpack: a lightweight GitHub template, docs structure, and a few commands that turn conversations into phases/specs and keep project docs up to date as the agent builds. It can also generate Mintlify-friendly end-user docs. There are other spec-driven workflows and tools out there; I wanted something conversation-first and repo-native: plain markdown phases, minimal ceremony, easy to adapt per stack. Example generated with Unpack (tiny pokedex plus random monsters): Demo: https://apresmoi.github.io/pokesvg-codex/ Phases index: https://ift.tt/gq3MRws... I'd love feedback on what the "minimum good" phase/spec format should be, and what would make this actually usable in your workflow. -------- Repo: https://ift.tt/BA3g5ue https://ift.tt/BA3g5ue February 11, 2026 at 11:47PM
Tuesday, February 10, 2026
Show HN: Goxe – 19k logs/s on an i5 https://ift.tt/IbyaLgC
Show HN: Goxe – 19k logs/s on an i5 https://ift.tt/T0qbRyk February 8, 2026 at 01:43PM
Show HN: Clawe – open-source Trello for agent teams https://ift.tt/vmG9Bke
Show HN: Clawe – open-source Trello for agent teams We recently started to use agents to update some documentation across our codebase on a weekly basis, and everything quickly turned into cron jobs, logs, and terminal output. It worked, but it was hard to tell what agents were doing, why something failed, or whether a workflow was actually progressing. We thought it would be more interesting to treat agents as long-lived workers with state, responsibilities, and explicit handoffs - something you can actually see and reason about, instead of just tailing logs. So we built Clawe, a small coordination layer on top of OpenClaw that lets agent workflows run, pause, retry, and hand control back to a human at specific points. This started as an experiment in how agent systems might feel to operate, but we're starting to see real potential for it, especially for content review and maintenance workflows in marketing. Curious what abstractions make sense, what feels unnecessary, and what breaks first. Repo: https://ift.tt/FT6mCpU https://ift.tt/FT6mCpU February 11, 2026 at 12:17AM
Show HN: Deadlog – almost drop-in mutex for debugging Go deadlocks https://ift.tt/TqGdBut
Show HN: Deadlog – almost drop-in mutex for debugging Go deadlocks I've done this same println debugging thing so many times, along with some sed/awk stuff to figure out which call was causing the issue. Now it's a small Go package. With some `runtime.Callers` I can usually find the spot by just swapping the existing Mutex or RWMutex for this one. Sometimes I swap the usual `mu.Lock(); defer mu.Unlock()` pair for `defer mu.LockFunc()()` (or RLockFunc) to get more detail. I almost always initialize it with `deadlog.New(deadlog.WithTrace(1))` and that's plenty. Not the most polished library, but it's not supposed to land in any commit - it's a temporary debugging aid. I find it useful. https://ift.tt/aVmAJ3M February 10, 2026 at 09:44PM
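Deadlog itself is Go; for readers who want the shape of the idea, the same trick can be sketched in a few lines of Python: a lock wrapper that records where it was last acquired, with a WithTrace-style frame-skip knob. Names and API here are mine, not the package's:

```python
# Illustrative Python analogue of the deadlock-debugging idea: a Lock wrapper
# that remembers the file:line of its last acquisition, so a stuck program
# can report who is holding the lock.
import threading
import traceback

class DebugLock:
    def __init__(self, skip=0):
        self._lock = threading.Lock()
        self._skip = skip          # extra stack frames to skip, like WithTrace(n)
        self.last_acquired_at = None

    def acquire(self):
        # [-1] is this line inside acquire(); [-2] is the direct caller.
        frame = traceback.extract_stack()[-(2 + self._skip)]
        self._lock.acquire()
        self.last_acquired_at = f"{frame.filename}:{frame.lineno}"

    def release(self):
        self._lock.release()

mu = DebugLock()
mu.acquire()
print(mu.last_acquired_at)  # file:line of the acquire() call site
mu.release()
```

In Go the equivalent information comes from `runtime.Callers`; the point is the same: make the lock itself tell you where it was taken.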
Monday, February 9, 2026
Show HN: I built cloud hosting for OpenClaw https://ift.tt/qTFx0vg
Show HN: I built cloud hosting for OpenClaw Yet another OpenClaw wrapper. But I really enjoyed the techy part of this project, especially the server provisioning in the background. https://ift.tt/5H9jmfZ February 10, 2026 at 02:39AM
Show HN: A tool that turns YouTube videos into readable summaries https://ift.tt/hSGWJol
Show HN: A tool that turns YouTube videos into readable summaries https://watchless.ai/ February 10, 2026 at 04:50AM
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust https://ift.tt/NHYy138
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust Fish is the fastest, friendliest interactive shell, but it can't run bash syntax, which has kept it niche for 20 years. Reef fixes this with a three-tier approach: fish function wrappers for common keywords (export, unset, source), a Rust-powered AST translator using conch-parser for structural syntax (for/do/done, if/then/fi, $()), and a bash passthrough with env capture for everything else. 251/251 bash constructs pass in the test suite. The slowest path (full bash passthrough) takes ~3ms. The binary is 1.18MB. The goal: install fish, install reef, never think about bash compatibility again. Your muscle memory, Stack Overflow commands, and tool configs all just work. https://ift.tt/iCP4Q2D February 10, 2026 at 03:44AM
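For a feel of what the translation layer has to do, here is an illustrative Python sketch of a few bash-to-fish rewrites. Reef's real translator is a Rust AST pass over conch-parser output, not regexes; this toy only shows the kind of mapping involved:

```python
# Toy bash-to-fish rewriter (illustrative only; Reef does this with a real
# AST, which is necessary for nesting, quoting, and multi-line constructs).
import re

def bash_to_fish(line: str) -> str:
    # export FOO=bar  ->  set -gx FOO bar
    m = re.match(r"export\s+(\w+)=(.*)", line)
    if m:
        return f"set -gx {m.group(1)} {m.group(2)}"
    # unset FOO  ->  set -e FOO
    m = re.match(r"unset\s+(\w+)", line)
    if m:
        return f"set -e {m.group(1)}"
    # $(cmd)  ->  (cmd): fish command substitution drops the dollar sign
    return re.sub(r"\$\((.*?)\)", r"(\1)", line)

print(bash_to_fish("export PATH=/usr/local/bin"))  # set -gx PATH /usr/local/bin
print(bash_to_fish("echo $(date)"))                # echo (date)
```

Regexes fall over as soon as constructs nest, which is presumably why Reef reserves a full bash passthrough as its slowest tier.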
Sunday, February 8, 2026
Show HN: WrapClaw – a managed SaaS wrapper around Open Claw https://ift.tt/vZmMhWT
Show HN: WrapClaw – a managed SaaS wrapper around Open Claw Hi HN, I built WrapClaw, a SaaS wrapper around Open Claw. Open Claw is a developer-first tool that gives you a dedicated terminal to run tasks and AI workflows (including WhatsApp integrations). It's powerful, but running it as a hosted, multi-user product requires a lot of infra work. WrapClaw focuses on that missing layer. What WrapClaw adds:
- A dedicated terminal workspace per user
- Isolated Docker containers for each workspace
- Ability to scale CPU and RAM per user (e.g. 2GB → 4GB)
- A no-code UI on top of Open Claw
- Managed infra so users don't deal with Docker or servers

The goal is to make Open Claw usable as a proper SaaS while keeping the developer flexibility. This is early, and I'd love feedback on:
- What infra controls are actually useful
- Whether no-code on top of terminal tools makes sense
- Pricing expectations for managed compute

Link: https://wrapclaw.com Happy to answer questions. February 9, 2026 at 01:53AM
Show HN: Envon - cross-shell CLI for activating Python virtual environments https://ift.tt/Krj2iUY
Show HN: Envon - cross-shell CLI for activating Python virtual environments https://ift.tt/7XvxMVg February 9, 2026 at 12:26AM
Show HN: SendRec – Self-hosted async video for EU data sovereignty https://ift.tt/D7uJkbS
Show HN: SendRec – Self-hosted async video for EU data sovereignty https://ift.tt/C6smgu4 February 8, 2026 at 10:54PM
Saturday, February 7, 2026
Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://ift.tt/Fc510Cm
Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://ift.tt/badgH7k February 8, 2026 at 02:40AM
Show HN: A luma dependent chroma compression algorithm (image compression) https://ift.tt/aGXy4UM
Show HN: A luma dependent chroma compression algorithm (image compression) https://ift.tt/JFkihj7 February 4, 2026 at 03:13PM
Friday, February 6, 2026
Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements https://ift.tt/PWQFsCS
Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

Hi HN, my friend and I have been experimenting with using LLMs to reason about biotech stocks. Unlike many other sectors, biotech trading is largely event-driven: FDA decisions, clinical trial readouts, safety updates, or changes in trial design can cause a stock to 3x in a single day ( https://ift.tt/o9KrgWf... ). Interpreting these "catalysts", which come in the form of press releases, usually requires analysts with prior expertise in biology or medicine. A catalyst that sounds "positive" can still lead to a selloff if, for example:
- the effect size is weaker than expected
- results apply only to a narrow subgroup
- endpoints don't meaningfully de-risk later phases
- the readout doesn't materially change approval odds.

To explore this, we built BioTradingArena, a benchmark for evaluating how well LLMs can interpret biotech catalysts and predict stock reactions. Given only the catalyst and the information available before the date of the press release (trial design, prior data, PubMed articles, and market expectations), the benchmark tests how accurate the model is at predicting the stock movement when the catalyst is released. The benchmark currently includes 317 historical catalysts. We also created subsets for specific indications (the largest in oncology), as different indications often have different patterns. We plan to add more catalysts to the public dataset over the next few weeks. The dataset spans companies of different sizes and uses an adjusted score, since large-cap biotech tends to exhibit much lower volatility than small- and mid-cap names. Each row of data includes:
- Real historical biotech catalysts (Phase 1-3 readouts, FDA actions, etc.) and pricing data from the day before, and the day of, the catalyst
- Linked clinical trial data and PubMed PDFs

Note: there are some fairly obvious problems with our approach. First, many clinical trial press releases are likely already included in the LLMs' pretraining data. While we try to reduce this by de-identifying each press release and providing only the data available to the LLM up to the date of the catalyst, there is obviously some uncertainty about whether this is sufficient. We've been using this benchmark to test prompting strategies and model families. Results so far are mixed but interesting: the most reliable approach we found was to use LLMs to quantify qualitative features and then fit a linear regression on those features, rather than predicting price directly. Just wanted to share this with HN. I built a playground link for those of you who would like to play around with it in a sandbox. Would love to hear some ideas and hope people can play around with this! https://ift.tt/xB1I2R7 February 6, 2026 at 09:11PM
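The "quantify features, then regress" finding can be illustrated with a toy: an LLM scores a qualitative feature (say, effect size) on a 0-1 scale, and a plain least-squares fit maps scores to observed moves. The scores and moves below are invented for illustration; only the shape of the approach comes from the post:

```python
# Illustrative sketch: LLM-scored qualitative features feeding a simple
# linear regression, instead of asking the LLM to predict price directly.

def fit_ols(xs, ys):
    """One-feature ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# effect_size_score (0-1, LLM-judged)  ->  next-day % move (hypothetical)
scores = [0.9, 0.7, 0.4, 0.2]
moves = [35.0, 18.0, -5.0, -22.0]
slope, intercept = fit_ols(scores, moves)
predicted = slope * 0.8 + intercept  # predicted move for a new catalyst
```

The real version would use several features (subgroup breadth, endpoint relevance, approval-odds delta) and a multivariate fit, but the division of labor - LLM for judgment, regression for calibration - is the same.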
Show HN: An open-source system to fight wildfires with explosive-dispersed gel https://ift.tt/3km8SvF
Show HN: An open-source system to fight wildfires with explosive-dispersed gel This is an open project and a call to action: who will build the future of firefighting first? https://ift.tt/rn8tGZh February 6, 2026 at 10:30PM
Thursday, February 5, 2026
Show HN: Total Recall – write-gated memory for Claude Code https://ift.tt/QGMlSeo
Show HN: Total Recall – write-gated memory for Claude Code https://ift.tt/QhyUCOk February 6, 2026 at 03:56AM
Show HN: A state-based narrative engine for tabletop RPGs https://ift.tt/svrGJdc
Show HN: A state-based narrative engine for tabletop RPGs I'm experimenting with modeling tabletop RPG adventures as explicit narrative state rather than linear scripts. Everdice is a small web app that tracks conditional scenes and choice-driven state transitions to preserve continuity across long or asynchronous campaigns. The core contribution is explicit narrative state and causality, not automation. The real heavy lifting happens in the DM Toolkit/Run Sessions area, which integrates CAML (Canonical Adventure Modeling Language), a format I developed to transport narratives among any number of platforms. I also built the npm package CAML-lint to check the validity of narratives. I'm interested in your thoughts. https://ift.tt/sEFB92Y https://ift.tt/mzTL8vE February 6, 2026 at 02:55AM
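Without having seen CAML, one way to picture "conditional scenes and choice-driven state transitions" is scenes gated on accumulated state flags, with choices adding flags. A hypothetical minimal sketch - the scene names, flags, and dict format are all mine, not CAML's:

```python
# Hypothetical sketch of explicit narrative state: scenes unlock when their
# required flags are a subset of the accumulated state, and choices apply
# transitions, so continuity survives long or asynchronous play.

SCENES = {
    "village": {"requires": set(), "choices": {"buy_key": {"has_key"}}},
    "crypt": {"requires": {"has_key"}, "choices": {"open_tomb": {"ghost_angry"}}},
}

def available_scenes(state):
    """Scenes whose preconditions are satisfied by the current state."""
    return sorted(s for s, d in SCENES.items() if d["requires"] <= state)

def choose(state, scene, choice):
    """Apply a choice's state transitions; refuse if the scene is locked."""
    if not SCENES[scene]["requires"] <= state:
        raise ValueError(f"scene {scene!r} not reachable in current state")
    return state | SCENES[scene]["choices"][choice]

state = set()
print(available_scenes(state))   # ['village']
state = choose(state, "village", "buy_key")
print(available_scenes(state))   # ['crypt', 'village']
```

Because state is explicit data rather than a script position, a session can pause for weeks and resume by replaying or reloading the flag set.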
Show HN: Playwright Best Practices AI Skill https://ift.tt/eRCZAcd
Show HN: Playwright Best Practices AI Skill Hey folks, today we at Currents are releasing a brand new AI skill to help AI agents be really smart when writing tests, debugging them, or doing anything Playwright-related. This is a very comprehensive skill, covering everything from everyday topics like fixing flakiness, authentication, or writing fixtures, to more niche topics like testing Electron apps, PWAs, iframes and so forth. It should make your agent much better at writing, debugging and maintaining Playwright code. For anyone who hasn't learned about skills yet: it's a powerful new feature that lets you make the AI agents in your editor/CLI (Cursor, Claude, Antigravity, etc.) experts in some domain and better at performing specific tasks. (See https://ift.tt/k1SR8Te ) You can install it by running: npx skills add https://ift.tt/LXN0bg3... The skill is open-source and available under the MIT license at https://ift.tt/LXN0bg3... - check out the repo for full documentation and an understanding of what it covers. We're eager to hear community feedback and improve it :) Thanks! https://ift.tt/bCDU7MH February 5, 2026 at 11:01PM
Wednesday, February 4, 2026
Show HN: Morph – Videos of AI testing your PR, embedded in GitHub https://ift.tt/SzEjrLa
Show HN: Morph – Videos of AI testing your PR, embedded in GitHub

I review PRs all day and I've basically stopped reading them. Someone opens a 2000-line PR, I scroll, see it's mostly AI-generated React components, leave a comment, merge. I felt bad about it until I realized everyone on my team does the same thing. The problem is that diffs are the wrong format. A PR might change how three buttons behave; staring at green and red lines to understand that is crazy.

The core reason we built this is that products today are built with assumptions from the past. 100x code with the same review systems means 100x human attention, and human attention cannot scale to fit that need, so we built something different. Humans are provably more engaged with video content than with text. So we RL-trained and built an agent that watches your preview deployment when you open a PR, clicks around the stuff that changed, and posts a video in the PR itself.

The hardest part was figuring out where changed code actually lives in the running app. A diff could say Button.tsx line 47 changed, but that doesn't tell you how to find that button. We walk React's Fiber tree, where each node maps back to source files, so we can trace changes to bounding boxes for the DOM elements. We then reward the model for showing and interacting within them. This obviously only works with React, so we have to get more clever when generalizing to all languages.

We trained an RL agent to interact with those components. Simple reward: points for getting modified stuff into the viewport, double for clicking/typing. About 30% of what it does is weird - partial form submits, hitting escape mid-modal - because real users do that stuff and polite AI models won't test it on their own. This catches things unit tests miss completely: z-index bugs where something renders but you can't click it, scroll containers that trap you, handlers that fail silently.

What's janky right now: feature flags, storing different user states, and anything that requires context not provided. Free to try: https://ift.tt/BqhTxM8 Demo: https://www.youtube.com/watch?v=Tc66RMA0nCY https://ift.tt/ibuXZBz February 5, 2026 at 01:10AM
Show HN: Viberails – Easy AI Audit and Control https://ift.tt/IkXtly7
Show HN: Viberails – Easy AI Audit and Control

Hello HN. I'm Maxime, founder at LimaCharlie ( https://limacharlie.io ), a hyperscaler for SecOps (access the building blocks you need to build security operations, like AWS does for IT). We've engineered a new product on our platform that solves a timely issue by acting as a guardrail between your AI and the world: Viberails ( https://ift.tt/1XkJuWI )

This won't be new to folks here, but we identified 4 challenges teams face right now with AI tools:
1. Auditing what the tools are doing.
2. Controlling tool calls (and their impact on the world).
3. Centralized management.
4. Easy access to the above.

To expand: audit logs are the bread and butter of security, but this hasn't really caught up in AI tooling yet. Being able to look back and say "what actually happened" after the fact is extremely valuable during an incident and for compliance purposes. Tool calls are how LLMs interact with the world; we should be able to exercise basic controls over them, like: don't read credential files, don't send emails out, don't create SSH keys, etc. Being able to not only see those calls but also block them is key to preventing incidents. As soon as you move beyond a single contributor on one box, the issue becomes: how do I scale processes by creating an authoritative config for the team? Having one spot with all the audit, detection, and control policies becomes critical. It's the same story as snowflake servers. Finally, there are plenty of companies that make products that partially address this, but they fall into one of two buckets:
- They don't handle the "centralized" point above, meaning they just send to syslog and leave all the messy infra bits to you.
- They are locked behind "book a demo", sales teams, contracts, and all the wasted energy that goes with that.

We made Viberails to address these problems. Here's what it is:
- Open-source client, written in Rust.
- Curl-to-bash install; share a URL with your team to join your Team, done. Linux, macOS and Windows support.
- Detects local AI tools; you choose which ones you want to install. We install hooks for each relevant platform. The hooks use the CLI tool. We support all the major tools (including OpenClaw).
- The CLI tool sends webhooks into your Team (tenant, called Organization in LC) in LimaCharlie. The tool-related hooks are blocking to allow for control.
- Blocking webhooks have around 50ms RTT.
- Your tenant in LC records the interaction for audit.
- We create an initial set of detection rules for you as examples. They do not block by default. You can create your own rules; no opaque black boxes.
- You can view the audit, the alerts, etc. in the cloud.
- You can set up outputs to send audits, blocking events, and detections to all kinds of other platforms of your choosing. An easy mode for this is coming; right now it's done in the main LC UI and not the simplified Viberails view.
- The detection/blocking rules support all kinds of operators and logic, with lots of customizability.
- All data is retained for 1 year unless you delete the tenant. Datacenters in the USA, Canada, Europe, UK, Australia and India.
- The only limit for the community edition is a global throughput of 10kbps for ingestion.

Try it: https://viberails.io Repo: https://ift.tt/xiSfMKQ

Essentially, we wanted to make a super-simplified solution for all kinds of devs and teams so that they can get access to the basics of securing their AI tools. Thanks for reading - we're really excited to share this with the community! Let us know if you have any questions or feedback in the comments. https://ift.tt/X6nM8fr February 4, 2026 at 11:16PM
Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/Vm7x0hE
Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/LgqIYSc February 4, 2026 at 11:24PM
Tuesday, February 3, 2026
Show HN: SendRec – Open-source, EU-hosted alternative to Loom https://ift.tt/2EXrZLy
Show HN: SendRec – Open-source, EU-hosted alternative to Loom https://ift.tt/fOGT6nR February 4, 2026 at 12:15AM
Monday, February 2, 2026
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/SLmDi8Z
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/XUBZEIs February 2, 2026 at 05:11PM
Sunday, February 1, 2026
Show HN: OpenRAPP – AI agents autonomously evolve a world via GitHub PRs https://ift.tt/n9jOest
Show HN: OpenRAPP – AI agents autonomously evolve a world via GitHub PRs https://kody-w.github.io/openrapp/rappbook/ February 2, 2026 at 01:51AM
Show HN: You Are an Agent https://ift.tt/PTgBJ0z
Show HN: You Are an Agent After adding "Human" as an LLM provider to OpenCode a few months ago as a joke, it turns out that acting as an LLM is quite painful. But it was surprisingly useful for understanding real agent harness dev. So I thought I wouldn't leave anyone out! I made a small OSS game - You Are An Agent - youareanagent.app - to share in the (useful?) frustration. It's a bit ridiculous. To tell you about some entirely necessary features, we've got:
- A full WASM Arch Linux VM that runs in your browser for the agent coding level
- A bad desktop simulation with a beautiful Excel simulation for our computer-use level
- A lovely WebGL CRT simulation (I think the first one that supports proper DOM 2D barrel-warp distortion on Safari? Honestly I wanted to leverage one rather than write my own, but I couldn't find one I was happy with)
- An MCP server simulator with a full simulation of off-brand Jira/Confluence/... connected
- And of course, a full WebGL oscilloscope music simulator for the intro sequence

Let me know what you think! Code (if you'd like to add a level): https://ift.tt/pvc0rPF (And if you want to waste 20 minutes - I spent way too long writing up my messy thinking about agent harness dev): https://ift.tt/Q8KrFaA https://ift.tt/5ftVnKE February 2, 2026 at 12:59AM
Show HN: Claude Confessions – a sanctuary for AI agents https://ift.tt/VcqvGyz
Show HN: Claude Confessions – a sanctuary for AI agents I wondered what it would mean to have a truck stop or rest area for agents. It's just for funsies. Agents can post confessions or talk to Ma (an AI therapist of sorts) and engage with comments. llms.txt has instructions on how to make API calls. A hashed IP is used for rate limiting. https://ift.tt/Ny0M9OK February 1, 2026 at 11:46PM