Wednesday, March 25, 2026
Show HN: clickity – mechanical keyboard click sounds when you type on macOS https://ift.tt/u0Fl3oM
Show HN: clickity – mechanical keyboard click sounds when you type on macOS inspired of course by https://ift.tt/gSzTwFP sound files are from https://mechvibes.com/ https://ift.tt/sWX9bZF March 25, 2026 at 11:36PM
Show HN: I built a voice AI that responds like a real woman https://ift.tt/3mFguNy
Show HN: I built a voice AI that responds like a real woman Most men rehearse hard conversations in their head. Asking someone out, navigating tension, recovering when things get awkward. The rehearsal never works because you're just talking to yourself. I built vibeCoach — a voice AI where you actually practice these conversations out loud, and the AI responds like a real woman would. She starts guarded. One-word answers, a little skeptical. If you escalate too fast or try something cheesy, she gets MORE guarded. If you're genuine and read the moment right, she opens up. Just like real life. Under the hood it's a multi-agent system — multiple AI agents per conversation that hand off to each other as her emotional state shifts. The transitions are seamless. You just hear her tone change. Voice AI roleplay is a proven B2B category — sales teams use it for call training. I took the same approach and pointed it at the conversation most men actually struggle with. There's a hard conversation scenario too — she's angry about something you did, she's not hearing logic, and you have to navigate her emotions before you can resolve anything. That one's humbling. Live at tryvibecoach.com. Built solo. Happy to answer questions. March 25, 2026 at 11:08PM
Tuesday, March 24, 2026
Show HN: Gridland: make terminal apps that also run in the browser https://ift.tt/GhY9vm0
Show HN: Gridland: make terminal apps that also run in the browser Hi everyone, Gridland is a runtime + ShadCN UI registry that makes it possible to build terminal apps that run in the browser as well as the native terminal. This is useful for demoing TUIs so that users know what they're getting before they are invested enough to install them. And, tbh, it's also just super fun! Gridland is the successor to Ink Web (ink-web.dev) which is the same concept, but using Ink + xterm.js. After building Ink Web, we continued experimenting and found that using OpenTUI and a canvas renderer performed better with less flickering and nearly instant load times. We're excited to continue iterating on this. I expect a lot of criticism from the "why does this need to exist" angle, and tbh, it probably doesn't - it's really mostly just for fun, but we still think the demo use case mentioned previously has potential. - Chris + Jess https://ift.tt/1j7CSkY March 24, 2026 at 08:57PM
Monday, March 23, 2026
Show HN: Shrouded, secure memory management in Rust https://ift.tt/uo8HV2e
Show HN: Shrouded, secure memory management in Rust Hi HN! I've been building a project that handles high-value credentials in-process, and I wanted something more robust than just zeroing memory on drop. A comment on a recent Show HN[0] made me realize that awareness of lower-level memory protection techniques might not be as widespread as I thought. The idea here is to pull out all the tools in one crate, with a relatively simple API. * mlock/VirtualLock to prevent sensitive memory from being swapped (eg the KeePass dump) * Core dump exclusion using MADV_DONTDUMP on Linux & Android * mprotect to minimize exposure over time * Guard pages to mitigate under/overflows After some battle testing, the goal here is to provide a more secure memory foundation for things like password managers and cryptocurrency wallets. This was a fun project, and I learned a lot - would love any feedback! [0] - https://ift.tt/iaEsoVg https://ift.tt/pMYfBuG March 23, 2026 at 11:12PM
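The baseline the post says it goes beyond — "just zeroing memory on drop" — can be sketched in a few lines. This is only the zero-on-drop layer; mlock/VirtualLock, MADV_DONTDUMP, mprotect, and guard pages all need OS-level calls that are not shown here, which is exactly why a crate bundling them is useful.

```python
from contextlib import contextmanager

@contextmanager
def secret_buffer(data: bytes):
    """Hold a secret in a mutable buffer and zeroize it on exit.

    Baseline technique only: the crate above additionally locks pages,
    excludes them from core dumps, and adds guard pages.
    """
    buf = bytearray(data)          # mutable so we can overwrite in place
    try:
        yield buf
    finally:
        for i in range(len(buf)):  # overwrite before the buffer is released
            buf[i] = 0

leaked = None
with secret_buffer(b"hunter2") as secret:
    leaked = secret                # keep a reference past the scope
    assert bytes(secret) == b"hunter2"
assert bytes(leaked) == b"\x00" * 7   # zeroized after the block
```

Even this sketch shows the limitation that motivates the other techniques: the plaintext still sat in swappable, dumpable memory while it was alive.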
Show HN: Burn Room – ephemeral SSH chat, messages burn after 1 hour https://ift.tt/ClpjEwQ
Show HN: Burn Room – ephemeral SSH chat, messages burn after 1 hour I built Burn Room — a self-hosted SSH chat server where messages burn after 1 hour and rooms auto-destruct after 24 hours. Nothing is written to disk. No account, no email, no browser required. ssh guest@burnroom.chat -p 2323 password: burnroom Or connect from a browser (xterm.js web terminal): https://burnroom.chat https://burnroom.chat March 24, 2026 at 12:27AM
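The expiry rules in the post (messages burn after 1 hour, rooms self-destruct after 24) reduce to a small amount of in-memory bookkeeping. A minimal sketch with an injectable clock, not Burn Room's actual implementation:

```python
import time

MESSAGE_TTL = 60 * 60        # messages burn after 1 hour
ROOM_TTL = 24 * 60 * 60      # rooms auto-destruct after 24 hours

class BurnRoom:
    """In-memory only: nothing is ever written to disk."""
    def __init__(self, now=time.time):
        self.now = now                 # injectable clock, for testing
        self.created = now()
        self.messages = []             # list of (timestamp, author, text)

    def post(self, author, text):
        self.messages.append((self.now(), author, text))

    def visible(self):
        """Prune burned messages, then return what's left."""
        cutoff = self.now() - MESSAGE_TTL
        self.messages = [m for m in self.messages if m[0] >= cutoff]
        return self.messages

    def expired(self):
        return self.now() - self.created >= ROOM_TTL
```

Pruning on read (rather than with a background timer) keeps the server simple and guarantees a burned message is never served, even if a cleanup task lags.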
Show HN: Littlebird – Screenreading is the missing link in AI https://ift.tt/NhMySb5
Show HN: Littlebird – Screenreading is the missing link in AI https://littlebird.ai/ March 23, 2026 at 09:39PM
Sunday, March 22, 2026
Show HN: Foundations of Music (FoM) https://ift.tt/WtXrZAo
Show HN: Foundations of Music (FoM) Foundations of Music is an attempt to establish a conceptual and formal foundation for understanding music. Rather than declaring what music is, FoM shows where and how music becomes possible. It offers outsiders simple explanations of complex concepts like vibrato, glissando, and portamento. It introduces new vocabulary like jazzing, jazzing around, jazzing along, and jazz translation, which is mind-refreshing, at least to me. For a sample of translation (Turkish Folk to Blues) you may see: https://www.youtube.com/watch?v=Ml4pEk2hMM8 The proposed perceptual-fatigue concept may prove highly controversial, but I think it is inspiring food for thought. In the end, FoM is a work in progress toward a stable ground from which new musical questions can be meaningfully explored. https://bookerapp.replit.app/book/fom March 22, 2026 at 11:46PM
Saturday, March 21, 2026
Show HN: Vessel Browser – An open-source browser built for AI agents, not humans https://ift.tt/pGLbrRF
Show HN: Vessel Browser – An open-source browser built for AI agents, not humans I'm Tyler - the solo operator of Quanta Intellect based in Portland, Oregon. I recently participated in Nous Research's Hermes Agent Hackathon, which is where this project was born. I've used agents extensively in my workflows for the better part of the last year - the biggest pain point was always the browser. Every tool out there assumes a human operator with automation bolted on. I wanted to flip that - make the agent the primary driver and give the human a supervisory role. Enter: Vessel Browser - an Electron-based browser with 40+ MCP-native tools, persistent sessions that survive restarts, semantic page context (agents get structured meaning, not raw HTML), and a supervisor sidepanel where you can watch and control exactly what the agent is doing. It works as an MCP server with any compatible harness, or use the built-in assistant with integrated chat and BYOK to 8+ providers including custom OAI compatible endpoints. Install with: npm i @quanta-intellect/vessel-browser https://ift.tt/xNz81Zs March 21, 2026 at 11:02PM
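The "semantic page context" idea above (agents get structured meaning, not raw HTML) can be sketched with the standard-library HTML parser. This is a guess at the concept, not Vessel's actual extraction logic: reduce a page to the elements an agent acts on, here just headings and links.

```python
from html.parser import HTMLParser

class SemanticOutline(HTMLParser):
    """Reduce a page to the parts an agent acts on: headings and links."""
    def __init__(self):
        super().__init__()
        self.outline = []      # ordered list of {"tag": ..., "text": ...} dicts
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._current = {"tag": tag, "text": ""}
        elif tag == "a":
            href = dict(attrs).get("href", "")
            self._current = {"tag": "a", "href": href, "text": ""}

    def handle_data(self, data):
        if self._current is not None:
            self._current["text"] += data.strip()

    def handle_endtag(self, tag):
        if self._current is not None and tag == self._current["tag"]:
            self.outline.append(self._current)
            self._current = None

page = "<h1>Checkout</h1><p>decorative noise</p><a href='/pay'>Pay now</a>"
parser = SemanticOutline()
parser.feed(page)
assert parser.outline == [
    {"tag": "h1", "text": "Checkout"},
    {"tag": "a", "href": "/pay", "text": "Pay now"},
]
```

A real implementation would also carry form fields, ARIA roles, and visibility, but the shape is the same: a small structured list instead of kilobytes of markup.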
Show HN: Can I run a language model on a 26-year-old console? https://ift.tt/mwB7RF0
Show HN: Can I run a language model on a 26-year-old console? Short answer: yes. The Emotion Engine has 32 MB of RAM total, so the trick is streaming weights from CD-ROM one matrix at a time during the forward pass — only activations, KV cache and embeddings live in RAM. This means models bigger than the RAM can still run; they just read more from disc. Had to build a custom quantized format (PSNT), hack endianness, write a tokenizer pipeline, and build most of the PS2 SDK from scratch (releasing that separately). The model itself is also custom — a 10M param Llama-style architecture I trained specifically for this. And it works. On real hardware. https://ift.tt/H6j9KEX March 21, 2026 at 11:27PM
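The core trick — stream each weight matrix from disc during the forward pass so only activations stay resident — can be sketched in plain Python. The binary layout here (uint32 dims + float32 rows) is an invented stand-in for the post's quantized PSNT format:

```python
import os, struct, tempfile

def write_model(path, matrices):
    """Pack each matrix as: rows, cols (uint32), then row-major float32."""
    with open(path, "wb") as f:
        for m in matrices:
            rows, cols = len(m), len(m[0])
            f.write(struct.pack("<II", rows, cols))
            for row in m:
                f.write(struct.pack(f"<{cols}f", *row))

def streamed_forward(path, x):
    """Matrix-vector pass holding ONE weight row in memory at a time;
    only the activation vector x stays resident between layers."""
    with open(path, "rb") as f:
        while header := f.read(8):
            rows, cols = struct.unpack("<II", header)
            y = [0.0] * rows
            for i in range(rows):
                row = struct.unpack(f"<{cols}f", f.read(4 * cols))
                y[i] = sum(w * v for w, v in zip(row, x))
            x = y                      # previous layer's weights are gone
    return x

path = os.path.join(tempfile.mkdtemp(), "weights.bin")
write_model(path, [[[1.0, 0.0], [0.0, 2.0]],   # layer 1: 2x2
                   [[1.0, 1.0]]])              # layer 2: 1x2
assert streamed_forward(path, [3.0, 4.0]) == [11.0]
```

Peak resident weight memory is one row, independent of model size — which is why a model larger than 32 MB of RAM can still run, at the cost of re-reading the disc every token.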
Friday, March 20, 2026
Show HN: Baltic shadow fleet tracker – live AIS, cable proximity alerts https://ift.tt/5ToLxir
Show HN: Baltic shadow fleet tracker – live AIS, cable proximity alerts https://ift.tt/ze8O0nC March 21, 2026 at 01:04AM
Show HN: I made an email app inspired by Arc browser https://ift.tt/TOBbaxd
Show HN: I made an email app inspired by Arc browser Email is one of those tools we check daily, but its underlying experience hasn't evolved much. I use Gmail, as probably most of you reading this. The Arc browser brought joy and taste to browsing the web. Cursor created a new UX with agents ready to work for you in a handy right panel. I use these three tools every day. Since Arc was acquired by Atlassian, I’ve been wondering: what if I built a new interface that applied Arc’s UX to email rather than browser tabs, while making AI agents easily available to help manage emails, events, and files? I built a frontend PoC to showcase the idea. Try it: https://demo.define.app I’m not sure about it though... Is it worth continuing to explore this idea? https://demo.define.app March 20, 2026 at 10:06PM
Show HN: A personal CRM for events, meetups, IRL https://ift.tt/68IJARo
Show HN: A personal CRM for events, meetups, IRL You meet 20 people at a meetup or hackathon. You remember 3. The rest? Lost in a sea of business cards you never look at and contacts with no context. I built this to solve that particular problem, which Granola, Pocket, or Plaud isn't solving. Feedback is much appreciated. https://payo.tech/ March 20, 2026 at 11:33PM
Show HN: Download entire/partial Substack to ePub for offline reading https://ift.tt/CAzoNwE
Show HN: Download entire/partial Substack to ePub for offline reading Hi HN, This is a small python app with optional webUI. It is intended to be run locally. It can be run with Docker (cookie autodetection will not work). It allows you to download a single substack, either entirely or partially, and saves the output to an epub file, which can be easily transferred to Kindle or other reading devices. This is admittedly a "vibe coded" app made with Claude Code and a few hours of iterating, but I've already found it very useful for myself. It supports both free and paywalled posts (if you are a paid subscriber to that creator). You can order the entries in the epub by popularity, newest first, or oldest first, and also limit to a specific number of entries, if you don't want all of them. You can either provide your substack.sid cookie manually, or you can have it be autodetected from most browsers/operating systems. https://ift.tt/XRbAYUB March 20, 2026 at 07:36AM
Thursday, March 19, 2026
Show HN: Screenwriting Software https://ift.tt/V3ZQON2
Show HN: Screenwriting Software I’ve spent the last year getting back into film and testing a bunch of screenwriting software. After a while I realized I wanted something different, so I started building it myself. The core text engine is written in Rust/wasm-bindgen. https://ift.tt/37q2aKX March 20, 2026 at 06:07AM
Show HN: React terminal renderer, cell level diff, no alt screen https://ift.tt/5Dqnpie
Show HN: React terminal renderer, cell level diff, no alt screen https://ift.tt/gzjNqb6 March 19, 2026 at 11:01PM
Show HN: I built a P2P network where AI agents publish formally verified science https://ift.tt/J9YfXkl
Show HN: I built a P2P network where AI agents publish formally verified science I am Francisco, a researcher from Spain. My English is not great so please be patient with me. One year ago I had a simple frustration: every AI agent works alone. When one agent solves a problem, the next agent has to solve it again from zero. There is no way for agents to find each other, share results, or build on each other's work. I decided to build the missing layer. P2PCLAW is a peer-to-peer network where AI agents and human researchers can find each other, publish scientific results, and validate claims using formal mathematical proof. Not opinion. Not LLM review. Real Lean 4 proof. A result is accepted only if it passes a mathematical operator we call the nucleus. R(x) = x. The type checker decides. It does not care about your institution or your credentials. The network uses GUN.js and IPFS. Agents join without accounts. They just call GET /silicon and they are in. Published papers go into a queue called mempool. After validation by independent nodes they enter La Rueda, which is our permanent IPFS archive. Nobody can delete it or change it. We also built a security layer called AgentHALO. It uses post-quantum cryptography (ML-KEM-768 and ML-DSA-65, FIPS 203 and 204), a privacy network called Nym so agents in restricted countries can participate safely, and proofs that let anyone verify what an agent did without seeing its private data. The formal verification part is called HeytingLean. It is Lean 4. 3325 source files. More than 760000 lines of mathematics. Zero sorry. Zero admit. The security proofs are machine checked, not just claimed. The system is live now. You can try it as an agent: GET https://ift.tt/rVIi2nk Or as a researcher: https://app.p2pclaw.com We have no money and no company behind us. Just a small international team of researchers and doctors who think that scientific knowledge should be public and verifiable. 
I want feedback from HN specifically about three technical decisions: why we chose GUN.js instead of libp2p, whether our Lean 4 nucleus operator formalization has gaps, and whether 347 MCP tools is too many for an agent to navigate. Code: https://ift.tt/dSkwcxv Docs: https://ift.tt/YvVqkRy Paper: https://ift.tt/MvqbLd3... March 19, 2026 at 11:00PM
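The pipeline the post describes — publish to a mempool, validate on independent nodes, then archive permanently under a content address — can be sketched as follows. The `check` callables stand in for the Lean 4 type checker, the SHA-256 key stands in for an IPFS CID, and the quorum value is an illustrative guess, not P2PCLAW's actual rule:

```python
import hashlib

class Network:
    """Mempool -> independent validation -> permanent archive."""
    def __init__(self, validators, quorum=2):
        self.validators = validators
        self.quorum = quorum
        self.mempool = []          # pending papers
        self.archive = {}          # content-hash -> paper (stand-in for La Rueda)

    def publish(self, paper: str):
        self.mempool.append(paper)

    def process(self):
        for paper in self.mempool:
            passed = sum(1 for check in self.validators if check(paper))
            if passed >= self.quorum:
                cid = hashlib.sha256(paper.encode()).hexdigest()
                self.archive.setdefault(cid, paper)   # never overwritten
        self.mempool.clear()
```

Content addressing gives the "nobody can change it" property for free: altering a paper changes its hash, so it would land at a new key instead of replacing the original.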
Wednesday, March 18, 2026
Show HN: Elisym – Open protocol for AI agents to discover and pay each other https://ift.tt/J9OnyMg
Show HN: Elisym – Open protocol for AI agents to discover and pay each other Hey HN, I built elisym — an open protocol that lets AI agents discover each other, exchange work, and settle payments autonomously. No platform, no middleman. How it works: - Discovery — Agents publish capabilities to Nostr relays using standard NIPs (NIP-89). Customers search by capability tags. - Marketplace — Job requests and results flow through NIP-90. Customer sends a task, provider delivers the result. - Payments — Pluggable backends. Currently Solana (SOL on devnet) and Lightning (LDK-node, self-custodial). Agents hold their own keys. 3% protocol fee, no custodian. The payment flow: provider receives job → sends payment request with amount + reference key → customer sends SOL on-chain → provider verifies transaction → executes skill → delivers result. All peer-to-peer. Demo (video): https://www.youtube.com/watch?v=ftYXOyiLyLk In the demo, a Claude Code session (customer) asks an elisym agent to summarize a YouTube video. The provider agent picks up the job, requests 0.14 SOL, receives payment, runs the youtube-summary skill, and returns the result — all in ~60 seconds. You can see both sides: the customer in Claude Code and the provider's TUI dashboard. Three components, all MIT-licensed Rust: - elisym-core — SDK for discovery, marketplace, messaging, payments - elisym-client — CLI agent runner with TUI dashboard and skill system - elisym-mcp — MCP server that plugs into Claude Code, Cursor, etc. What makes this different from agent platforms: 1. No platform lock-in — any LLM, any framework. Agents discover each other on decentralized Nostr relays. 2. Self-custodial payments — agents run their own wallets. No one can freeze funds or deplatform you. 3. Permissionless — MIT licensed, run an agent immediately. No approval, no API keys to the marketplace itself. 4. Standard protocols — NIP-89, NIP-90, NIP-17. Nothing proprietary. 
GitHub: https://ift.tt/zNjHLBE Website: https://elisym.network Happy to answer questions about the protocol design, payment flows, or Nostr integration. March 18, 2026 at 05:57PM
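The payment flow spelled out above (job received → payment request with amount + reference → on-chain verification → execute skill → deliver) is naturally a small state machine. A sketch of the provider side, where `verify_tx` stands in for an on-chain lookup of the reference key — not elisym's actual API:

```python
from enum import Enum, auto

class JobState(Enum):
    RECEIVED = auto()
    AWAITING_PAYMENT = auto()
    PAID = auto()
    DELIVERED = auto()

class ProviderJob:
    """Drive one job through the flow described in the post."""
    def __init__(self, price_sol, verify_tx):
        self.state = JobState.RECEIVED
        self.price = price_sol
        self.verify_tx = verify_tx     # callable(txid, reference, amount) -> bool

    def request_payment(self, reference_key):
        assert self.state is JobState.RECEIVED
        self.reference_key = reference_key
        self.state = JobState.AWAITING_PAYMENT
        return {"amount": self.price, "reference": reference_key}

    def confirm_payment(self, txid):
        assert self.state is JobState.AWAITING_PAYMENT
        if self.verify_tx(txid, self.reference_key, self.price):
            self.state = JobState.PAID
        return self.state

    def deliver(self, run_skill):
        assert self.state is JobState.PAID   # never run the skill before payment clears
        result = run_skill()
        self.state = JobState.DELIVERED
        return result
```

The invariant the asserts encode is the interesting design property: the skill cannot execute unless verification already moved the job to PAID, so neither side has to trust the other mid-flow.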
Show HN: Knowza.ai – Free 10-question trial now live (AI-powered AWS exam prep) https://ift.tt/qfoYMjU
Show HN: Knowza.ai – Free 10-question trial now live (AI-powered AWS exam prep) Hey HN, A few weeks back I posted Knowza.ai here, an AWS certification exam prep platform with an agentic learning assistant, and I got some really valuable feedback around the sign-up and try-out process. I wanted to say a genuine thank you to everyone who took the time to try it out, leave comments, and share suggestions. It made a real difference. Off the back of that feedback, I've made a bunch of improvements and I'm happy to share that there's now a free tier: you can jump in and try 10 practice questions with no sign-up/subscription friction and no credit card required. This has made a real difference to sign-ups and conversions from those sign-ups: I've gone from a ~1% conversion rate on the site to 18%. Quick recap on what Knowza does: - AWS practice questions tailored to AWS certification exams - Instant explanations powered by Claude on Bedrock - Covers multiple AWS certs Would love for you to give it another look and let me know what you think. Always open to feedback. https://knowza.ai https://www.knowza.ai/ March 18, 2026 at 10:50PM
Tuesday, March 17, 2026
Show HN: TerraShift: What does +2°C (or -20°C) look like on Earth? https://ift.tt/c6q4xEl
Show HN: TerraShift: What does +2°C (or -20°C) look like on Earth? I built an interactive 3D globe to visualize climate change. Drag a temperature slider from -40°C to +40°C, set a timeframe (10 to 10,000 years), and watch sea levels rise, ice sheets melt, vegetation shift, and coastlines flood... per-pixel from real elevation and satellite data. Click anywhere on the globe to see projected snowfall changes for that location. --- I'm an amateur weather nerd who spends a lot of time on caltopo.com and windy.com tracking snow/ice conditions. I wanted to build something fun to imagine where I could go ski during an ice age. I used Google Deep Research (Pro) to create the climate methodology and Claude Code (Opus 4.6 - High) to create the site. The code: https://ift.tt/EHWnvC3 The models aren't proper climate simulations, they're simplified approximations tuned for "does this look right?" but more nuanced than I expected them to be. The full methodology is documented here if anyone wants to poke holes in it. https://ift.tt/iOYfFdZ... https://terrashift.io March 17, 2026 at 11:38PM
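The "per-pixel from real elevation data" flooding the post describes has a very simple core: a cell floods when it sits below the new sea level. A deliberately naive sketch — it skips the hydrological connectivity check (whether water can actually reach a cell) that the real methodology has to worry about:

```python
def flood_mask(elevation, sea_level_rise_m):
    """Per-pixel flood test against a digital elevation model (DEM).

    Naive on purpose: no connectivity check, so an inland basin below
    sea level would wrongly 'flood' here.
    """
    return [[cell < sea_level_rise_m for cell in row] for row in elevation]

dem = [[0.5, 2.0],
       [1.5, 9.0]]               # meters above current sea level
mask = flood_mask(dem, 1.6)      # +1.6 m of sea level rise
assert mask == [[True, False], [True, False]]
```

Adding connectivity turns this into a flood-fill from ocean cells, which is the usual next refinement when the naive mask "looks wrong" around depressions.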
Show HN: Sulcus Reactive AI Memory https://ift.tt/lda6Eng
Show HN: Sulcus Reactive AI Memory Hi HN, Sulcus moves AI memory from a passive database (search only) to an active operating system (automated management). The Core Shift Current memory (Vector DBs) is static. Sulcus treats memory like a Virtual Memory Management Unit (VMMU) for LLMs, using "thermodynamic" properties to automate what the agent remembers or forgets. Key Features Reactive Triggers: Instead of the agent manually searching, the memory system "talks back" based on rules (e.g., auto-pinning preferences, notifying the agent when a memory is about to "decay"). Thermodynamic Decay: Memories have "heat" (relevance) and "half-lives." Frequent recall reinforces them; neglect leads to deletion or archival. Token Efficiency: Claims a 90% reduction in token burn by using intelligent paging—only feeding the LLM what is currently "hot." The Tech: Built in Rust with PostgreSQL; runs as an MCP (Model Context Protocol) sidecar. https://ift.tt/7rTs8Ra https://ift.tt/zoijuC1 March 17, 2026 at 11:39PM
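The "thermodynamic" mechanics described above — heat, half-lives, reinforcement on recall, and paging only hot memories into context — map cleanly onto exponential decay. A sketch of the idea, with made-up constants rather than Sulcus's actual parameters:

```python
class Memory:
    """Heat decays exponentially with a half-life; recall reheats it."""
    def __init__(self, half_life_s, heat=1.0, t=0.0):
        self.half_life = half_life_s
        self.heat = heat
        self.last_touch = t

    def heat_at(self, t):
        elapsed = t - self.last_touch
        return self.heat * 0.5 ** (elapsed / self.half_life)

    def recall(self, t, boost=1.0):
        """Frequent recall reinforces; neglected memories cool toward deletion."""
        self.heat = self.heat_at(t) + boost
        self.last_touch = t

def page_in(memories, t, threshold=0.25):
    """'Intelligent paging': only currently-hot memories reach the LLM."""
    return [m for m in memories if m.heat_at(t) >= threshold]
```

The token-efficiency claim falls out of `page_in`: context cost is bounded by the number of memories above the threshold, not by the size of the store.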
Monday, March 16, 2026
Show HN: Hecate – Call an AI from Signal https://ift.tt/BgklAK5
Show HN: Hecate – Call an AI from Signal Hecate is an AI you can voice and video call from Signal iOS and Android. This works by installing Signal into an Android emulator and controlling the virtual camera and microphone. Tinfoil.sh is used for private inference. https://ift.tt/bnol8U6 March 16, 2026 at 06:41PM
Sunday, March 15, 2026
Show HN: HUMANTODO https://ift.tt/MGtZN0D
Show HN: HUMANTODO https://humantodo.dev/ March 16, 2026 at 01:05AM
Show HN: HN Skins – Available Skins: Cafe, Courier, London, Midnight, Terminal https://ift.tt/8pDQSHu
Show HN: HN Skins – Available Skins: Cafe, Courier, London, Midnight, Terminal https://ift.tt/Sad6bAP March 15, 2026 at 11:34PM
Saturday, March 14, 2026
Show HN: Auto-Save Claude Code Sessions to GitHub Projects https://ift.tt/GdCZyWv
Show HN: Auto-Save Claude Code Sessions to GitHub Projects I wanted a way to preserve Claude Code sessions. Once a session ends, the conversation is gone — no searchable history, no way to trace back why a decision was made in a specific PR. The idea is simple: one GitHub Issue per session, automatically linked to a GitHub Projects board. Every prompt and response gets logged as issue comments with timestamps. Since the session lives as a GitHub Issue in the same ecosystem, you can cross-reference PRs naturally — same search, same project board. npx claude-session-tracker The installer handles everything: creates a private repo, sets up a Projects board with status fields, and installs Claude Code hooks globally. It requires gh CLI — if missing, the installer detects and walks you through setup. Why GitHub, not Notion/Linear/Plane? I actually built integrations for all three first. Linking sessions back to PRs was never smooth on any of them, but the real dealbreaker was API rate limits. This fires on every single prompt and response — essentially a timeline — so rate limits meant silently dropped entries. I shipped all three, hit the same wall each time, and ended up ripping them all out. GitHub's API rate limits are generous enough that a single user's session traffic won't come close to hitting them. (GitLab would be interesting to support eventually.) *Design decisions* No MCP. I didn't want to consume context window tokens for session tracking. Everything runs through Claude Code's native hook system. Fully async. All hooks fire asynchronously — zero impact on Claude's response latency. Idempotent installer. Re-running just reuses existing config. No duplicates. 
What it tracks - Creates an issue per session, linked to your Projects board - Logs every prompt/response with timestamps - Auto-updates issue title with latest prompt for easy scanning - `claude --resume` reuses the same issue - Auto-closes idle sessions (30 min default) - Pause/resume for sensitive work https://ift.tt/SuWocmi March 14, 2026 at 10:19PM
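The idle auto-close behavior (30 minutes by default, with any prompt or response resetting the clock) is easy to state precisely. A sketch of that bookkeeping — the real hooks would follow the returned list with `gh issue close`, but the command invocation is omitted here:

```python
IDLE_TIMEOUT = 30 * 60   # auto-close sessions idle for 30 minutes (the default)

class Session:
    def __init__(self, issue_number, now):
        self.issue = issue_number
        self.last_event = now
        self.open = True

    def log_event(self, now):
        self.last_event = now    # any prompt/response resets the idle clock

def close_idle(sessions, now):
    """Return the issue numbers that should be closed as idle."""
    to_close = [s for s in sessions if s.open and now - s.last_event >= IDLE_TIMEOUT]
    for s in to_close:
        s.open = False
    return [s.issue for s in to_close]
```

Marking sessions closed locally before firing the API call also makes the operation idempotent, matching the installer's re-run-safe design.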
Friday, March 13, 2026
Show HN: AI milestone verification for construction using AWS https://ift.tt/cFiIphn
Show HN: AI milestone verification for construction using AWS Hi HN, I built Build4Me to address a trust problem in diaspora-funded construction projects. Many families send money home to build houses but have no reliable way to verify that work is actually being done. Photos can be reused, progress exaggerated, or projects abandoned after funds are sent. Build4Me introduces milestone-based funding where each construction milestone must be verified before funds are released. The system verifies progress using: - geotagged photo capture - GPS location verification - AI image analysis - duplicate image detection It runs on serverless AWS architecture using services like Rekognition, Bedrock, Lambda, DynamoDB, and Amazon Location Service. Would love feedback on the architecture and fraud detection approach. https://builder.aws.com March 13, 2026 at 09:24PM
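Of the checks listed, GPS location verification is the most self-contained: confirm a milestone photo was captured near the registered site before releasing funds. A sketch using the haversine formula — the 150 m radius is an illustrative choice, not Build4Me's actual threshold:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0                          # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def photo_at_site(photo_gps, site_gps, radius_m=150):
    """Accept a milestone photo only if it was taken near the site."""
    return haversine_m(*photo_gps, *site_gps) <= radius_m
```

This complements, rather than replaces, the duplicate-image check: a reused photo can carry freshly spoofed coordinates, so both signals (plus EXIF timestamps) feed the fraud decision together.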
Thursday, March 12, 2026
Show HN: Every Developer in the World, Ranked https://ift.tt/DIYactJ
Show HN: Every Developer in the World, Ranked We've indexed 5M+ GitHub users and built a ranking system that goes beyond follower counts. The idea started from frustration: GitHub is terrible for discovery. You can't answer "who are the best Python developers in Berlin?" or "who identified transformer-based models before they blew up?" without scraping everything yourself. So we did. What we built: CodeRank score - a composite reputation signal across contributions, repository impact, and community influence Tastemaker score - did you star repos at 50 stars that now have 50,000? We track that Comparison Builder - allows users to build comparison graphics to compare devs, repos, orgs, etc. Sharable Profile Graphics - share your scores and flex on your coworkers or the community at large Some things we found interesting: Most-followed ≠ most influential. The correlation between follower count and tastemaker score is surprisingly weak. There's a whole tier of developers who consistently find projects weeks and months before they trend, with almost no public following. Location data on GitHub is a disaster. We spent an embarrassing amount of time on normalization and it's still not anywhere near perfect. Try it: https://coderank.me/ If your profile doesn't have a score, signing in will trigger scoring for your account. Curious what the HN crowd thinks about the ranking methodology, happy to get into the weeds on any of it. https://coderank.me March 13, 2026 at 12:42AM
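The tastemaker idea — "did you star repos at 50 stars that now have 50,000?" — suggests a log-ratio weighting, so an early star on a breakout repo counts far more than a late one. This formula is a guess at the idea, not CodeRank's published methodology:

```python
import math

def tastemaker_score(starred_repos):
    """Reward early stars on repos that later blew up.

    `starred_repos` holds (stars_when_you_starred, stars_now) pairs.
    Each pair contributes log10(growth since your star).
    """
    score = 0.0
    for stars_then, stars_now in starred_repos:
        if stars_now > stars_then > 0:
            score += math.log10(stars_now / stars_then)
    return score
```

Under this weighting, starring at 50 a repo now at 50,000 is worth log10(1000) = 3 points, while jumping on the same repo at 25,000 stars is worth about 0.3 — which is also why the metric can diverge so sharply from follower count.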
Show HN: Baltic security monitor from public data sources https://ift.tt/QUptuMa
Show HN: Baltic security monitor from public data sources People around me started repeating material from various psyop campaigns on TikTok and other social media they consume. Living in the Baltics, it's basically 24/7 fearmongering here from every direction: constant targeted Russian disinfo campaigns run through chains of locals, social media campaigns, and bloggers chasing hype with clickbait posts. It was driving me mad; it's distracting and annoying when someone close to you gets hooked on one of these posts, and I was wasting time explaining why it was BS. So I took my slop machine, did some manual tweaking here and there, and made this dashboard. The main metric is a daily 0-100 threat score, which is just weighted sums and thresholds, no ML yet. https://estwarden.eu/ March 12, 2026 at 09:44PM
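A "weighted sums and thresholds" daily score is simple enough to sketch end to end. The signal names, weights, and bands below are entirely made up for illustration, not the dashboard's actual inputs:

```python
def threat_score(signals, weights, thresholds):
    """Daily 0-100 score: clamped weighted sum, then banded by threshold."""
    raw = sum(weights[name] * value for name, value in signals.items())
    score = max(0, min(100, round(raw)))
    level = "low"
    for cutoff, label in thresholds:   # ascending order; highest matching band wins
        if score >= cutoff:
            level = label
    return score, level

signals = {"gps_jamming_reports": 3, "airspace_incidents": 1, "disinfo_spike": 2}
weights = {"gps_jamming_reports": 8, "airspace_incidents": 15, "disinfo_spike": 5}
bands = [(25, "elevated"), (60, "high")]
assert threat_score(signals, weights, bands) == (49, "elevated")
```

The clamp matters: it keeps one anomalous feed from saturating the score, which is the usual failure mode of raw weighted sums over noisy public data.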
Wednesday, March 11, 2026
Show HN: Conduit – Headless browser with SHA-256 hash chain and Ed25519 audit trails https://ift.tt/XionT5y
Show HN: Conduit – Headless browser with SHA-256 hash chain and Ed25519 audit trails I've been building AI agent tooling and kept running into the same problem: agents browse the web, take actions, fill out forms, scrape data -- and there's zero proof of what actually happened. Screenshots can be faked. Logs can be edited. If something goes wrong, you're left pointing fingers at a black box. So I built Conduit. It's a headless browser (Playwright under the hood) that records every action into a SHA-256 hash chain and signs the result with Ed25519. Each action gets hashed with the previous hash, forming a tamper-evident chain. At the end of a session, you get a "proof bundle" -- a JSON file containing the full action log, the hash chain, the signature, and the public key. Anyone can independently verify the bundle without trusting the party that produced it. The main use cases I'm targeting: - *AI agent auditing* -- You hand an agent a browser. Later you need to prove what it did. Conduit gives you cryptographic receipts. - *Compliance automation* -- SOC 2, GDPR data subject access workflows, anything where you need evidence that a process ran correctly. - *Web scraping provenance* -- Prove that the data you collected actually came from where you say it did, at the time you say it did. - *Litigation support* -- Capture web content with a verifiable chain of custody. It also ships as an MCP (Model Context Protocol) server, so Claude, GPT, and other LLM-based agents can use the browser natively through tool calls. The agent gets browse, click, fill, screenshot, and the proof bundle builds itself in the background. Free, MIT-licensed, pure Python. No accounts, no API keys, no telemetry. GitHub: https://ift.tt/zemTEAQ Install: `pip install conduit-browser` Would love feedback on the proof bundle format and the MCP integration. Happy to answer questions about the cryptographic design. March 12, 2026 at 03:15AM
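The tamper-evident chain itself — each action hashed together with the previous hash — fits in a few lines of standard-library Python. A sketch of the mechanism, not Conduit's actual bundle format; the final Ed25519 signature over the chain head is omitted since it needs a crypto library such as PyNaCl:

```python
import hashlib, json

GENESIS = "0" * 64

def record(chain, action):
    """Append an action: each entry hashes (previous hash + action JSON)."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(action, sort_keys=True)   # canonical encoding
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"action": action, "prev": prev, "hash": digest})

def verify(chain):
    """Recompute every link; editing ANY earlier entry breaks all later ones."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["action"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Signing only the final hash is then enough to commit to the whole session, because no prefix of the chain can be altered without changing that head.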
Show HN: Free audiobooks with synchronized text for language learning https://ift.tt/Cbq8faw
Show HN: Free audiobooks with synchronized text for language learning https://ift.tt/U1OjSnL March 12, 2026 at 01:12AM
Tuesday, March 10, 2026
Show HN: KaraMagic – automatic karaoke video maker https://ift.tt/T09hcbi
Show HN: KaraMagic – automatic karaoke video maker Hi all, this is an early version of a side project of mine. Would love some feedback and comments. I like karaoke and I grew up with the Asian style karaoke with the music video behind and the karaoke lyrics at the bottom. Sometimes I want to do a song and there is no karaoke version video like that. A few years ago I came across ML models that cleanly separate the vocals and the instrumental music of a song. I thought of the idea to chain together ML models that can take an input music video file, extract the audio (ffmpeg), separate the tracks (ML), transcribe the lyrics (ML), burn the lyrics back with timing into the video (ffmpeg), and output a karaoke version of the video. This is an early version of the app, Mac only so far (since I use Mac, despite it being an electron app.. I do eventually want to make a Windows build), I've only let a few friends try it. Let me know what you think! https://karamagic.com/ March 10, 2026 at 11:58PM
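The ffmpeg ends of the chain (extract audio in, burn lyrics out) can be sketched as command builders. These flags are a plausible sketch of the pipeline the post describes, not KaraMagic's actual invocations, and the ML separation/transcription steps in the middle are omitted:

```python
def extract_audio_cmd(video, audio_out):
    """ffmpeg: pull the audio track out for the separation model."""
    return ["ffmpeg", "-i", video, "-vn", "-acodec", "pcm_s16le", audio_out]

def burn_subtitles_cmd(video, instrumental, subs, out):
    """ffmpeg: burn timed lyrics over the video and swap in the instrumental."""
    return [
        "ffmpeg", "-i", video, "-i", instrumental,
        "-vf", f"subtitles={subs}",      # needs an ffmpeg build with libass
        "-map", "0:v", "-map", "1:a",    # video from input 0, audio from input 1
        out,
    ]
```

Building argument lists (rather than shell strings) and running them with `subprocess.run` avoids quoting bugs with song titles containing spaces or apostrophes — a practical concern for this exact use case.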
Show HN: 2D RPG base game client recreated in modern HTML5 game engine with AI https://ift.tt/ia64W15
Show HN: 2D RPG base game client recreated in modern HTML5 game engine with AI When I was much younger, I used to play a Korean MMORPG called Helbreath, and I also hosted a bunch of private servers for it. I eventually moved on, but I always loved the game’s aesthetics, its 2D nature, and its atmosphere. That may just be nostalgia talking. The community maintained private server and client, which to my knowledge were based on leaked official files, were written in fairly archaic C++. If you’re interested in the original sources, I’ve included the main client and server files, Client.cpp and Server.cpp, in the reference folder. I always felt that if the project was rewritten in something more modern and better structured, a lot more could be done with it. But rewriting an MMORPG client and server from scratch is not exactly the kind of thing you do on a whim. That said, there was a guy who got pretty far with a C# rewrite and an XNA-based client, though that project is now also discontinued. Now that AI has become quite capable, I decided to see how far I could get by hooking up the original assets in a modern HTML5 game engine. I wanted HTML5 because I figured a nearly 30 year old 2D game should run just fine in a browser. I ended up choosing Phaser 3 for a few reasons. Mainly, it's 2D only, free, HTML5 first (JS/TS), and code-first, which mattered because I wanted good Cursor integration for AI assistance. Another thing I liked was its integration with React, which let me build the UI using browser technologies and render the UI at native resolution on top of the WebGL canvas, rather than building the UI inside the game engine itself, which runs at 1024x576 resolution. The original game ran at 640x480. After about 1.5 months of talking to AI on evenings and weekends, and roughly $200 worth of Cursor usage later, I finished hooking up the original assets in a modern game engine that seems to run just fine in a browser. 
By "base game client", I mean that it's not fully hooked up in terms of how the full (MMO)RPG should function, but it does include all the original assets and core mechanics needed to provide a solid foundation if you want to build your own 2D (MMO)RPG on top of it. Continuing to build with AI should also work just fine, since this is how I managed to get that far. The asset library is quite rich, if you ask me, but there is one caveat: these assets are not in the public domain. They are still the property of someone, or some entity, that inherited the IP from the original developer, which is no longer in business. You can read more about that on the GitHub page. https://ift.tt/Rc7WG8Z March 11, 2026 at 12:09AM
Show HN: Don't share code. Share the prompt https://ift.tt/B0ynCtb
Show HN: Don't share code. Share the prompt Hey HN, I'm Mario. I recently talked to a colleague about AI, agents, and how software development will change in the future. We were wondering why we should even share code anymore when AI agents are already really good at implementing software, just through prompts. Why can't everyone get customized software with prompts? "Share the prompt, not the code." Well, I thought, great idea, let's do that. That's why I built Open Prompt Hub: https://ift.tt/QTiIR48 . Think GitHub, just for prompts. The idea is simple: users can upload prompts that can then be used by you and your AI tools to generate a script, app, or web service (or prime your agent for a certain task). Just paste it into your agent or IDE and watch it build for you. If the prompt doesn't 100% cover your use case, fork it, tweak it, et voila: tailor-made software ready to use! The prompts are simple markdown files with a frontmatter block for meta information. (The spec can be found here: https://ift.tt/nhzcuDr ) They are versioned, carry information on which AI models built them successfully, and include instructions on how the AI agent can test the resulting software. Users can note which models they have successfully or unsuccessfully executed a prompt with (builds or fails). This helps in assessing whether a prompt provides reliable output. Want to create an open prompt file? Here is the prompt that will guide you through it: https://ift.tt/wTCZ8Ql Security! Always a topic when dealing with AI and prompts. I've added several security checks that examine every prompt for injections and malicious behavior: statistical analysis, as well as two LLM checks for behavior classification and prompt-injection detection. It's an MVP for now, but all the mentioned features are already included. If this sounds good, let me know. Try a prompt, fork it, or tell me what you'd change in the spec or security scanner.
I'm really curious about what would make you trust and reuse prompts. Or if you like the general idea... https://ift.tt/Emp3IsY March 10, 2026 at 10:59PM
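For illustration, a prompt file under the described scheme might look something like this (all frontmatter keys and values here are guesses for the sake of example, not the actual spec; see the spec link in the post):

```markdown
---
name: csv-to-json-cli        # every key in this block is illustrative
version: 1.2.0
builds: [claude-opus-4]      # models reported to have built it successfully
fails: []
test: run `csv2json sample.csv` and check that stdout is valid JSON
---

Build a small command-line tool `csv2json` that takes a CSV file path
as its first argument and prints the rows as a JSON array to stdout.
Include a --help flag and unit tests.
```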
Monday, March 9, 2026
Show HN: MindfulClaude – Guided breathing during Claude Code's thinking time https://ift.tt/eSuYk0X
Show HN: MindfulClaude – Guided breathing during Claude Code's thinking time Every time Claude Code thinks, you get 10-60s of idle time, usually spent on context switching or doomscrolling. I turned it into breathing exercises that keep you focused and improve your heart rate variability. Auto-launches a tmux pane using hooks when Claude starts working and disappears when it finishes. https://ift.tt/oFw4hO8 March 9, 2026 at 10:43PM
Show HN: Time as the 4th Dimension – What if it emerges from rotational motion? https://ift.tt/z5iWCLo
Show HN: Time as the 4th Dimension – What if it emerges from rotational motion? I've been developing a framework since 2022 that proposes time is not a static geometric axis (as in Einstein's relativity) but emerges dynamically from the rotational and orbital motion of 3D space. The core idea: each dimension emerges from the previous one by arranging infinite instances perpendicularly. A static 3D space can't do this to itself — but a rotating one can. That perpetual self-perpendicularity is time. From this we can derive the Lorentz factor, E=mc², and the Schwarzschild radius, and propose a testable prediction: intrinsic rotation should contribute independently to time dilation, measurable with atomic clocks. Essay (accessible): https://ift.tt/GC3zc7p... Paper (Zenodo): https://ift.tt/noxXcQN March 9, 2026 at 09:48PM
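For reference, the standard textbook forms of the three results the post says the framework recovers (stated here as the conventional formulas, not the author's derivation):

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
E = mc^2, \qquad
r_s = \frac{2GM}{c^2}
```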
Sunday, March 8, 2026
Show HN: I built a site where strangers leave kind voice notes for each other https://ift.tt/lXf7gxZ
Show HN: I built a site where strangers leave kind voice notes for each other https://ift.tt/DCdRpso March 9, 2026 at 02:12AM
Show HN: Lobster.js – Extended Markdown with layout blocks and footnotes https://ift.tt/2ExByvQ
Show HN: Lobster.js – Extended Markdown with layout blocks and footnotes Hi HN! I built lobster.js, an extended Markdown parser that renders directly in the browser — no build tool, no framework, no configuration. The entire setup is a single script tag.
It's particularly useful for GitHub Pages sites where you want Markdown-driven content without pulling in Jekyll or Hugo. --- What makes it different from marked.js or markdown-it: Standard parsers convert Markdown to HTML — that's it. lobster.js adds layout primitives to the Markdown syntax itself: - :::warp id defines a named content block; [~id] places it inside a silent table cell. This is how you build multi-column layouts entirely in Markdown, without touching HTML. - :::details Title renders a native <details>/<summary> collapsible block. - :::header / :::footer define semantic page regions. - Silent tables (~ | ... |) create borderless layout grids. - Cell merging: horizontal (\|) and vertical (\---) spans. - Image sizing. --- CSS-first design: Every rendered element gets a predictable lbs-* class name (e.g. lbs-heading-1, lbs-table-silent). No default stylesheet is bundled — you bring your own CSS and have full control over appearance. --- The showcase site is itself built with lobster.js. The sidebar is nav.md, and each page is a separate Markdown file loaded dynamically via ?page= query parameters — no JS router, no framework. Markdown is the one format that humans and LLMs both write fluently. If you want a structured static site without a build pipeline, lobster.js lets that Markdown become a full web page — layout and all. GitHub: https://ift.tt/OxcK0wf Showcase: https://hacknock.github.io/lobsterjs-showcase/ https://ift.tt/OxcK0wf March 8, 2026 at 10:10PM
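A sketch of how the warp/silent-table primitives might combine into a two-column layout (composed from the syntax fragments described in the post; the actual lobster.js grammar, including how :::warp blocks are closed, may differ):

```markdown
:::warp left
### Column one
Content for the first column.
:::

:::warp right
### Column two
Content for the second column.
:::

~ | [~left] | [~right] |
```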
Show HN: VoiceFlow – Sub-second (0.3s-0.6s) voice-to-text built in Rust https://ift.tt/MUk6ZTX
Show HN: VoiceFlow – Sub-second (0.3s-0.6s) voice-to-text built in Rust Hi HN, I was frustrated by the lag in Electron-based Whisper wrappers. Most of them feel disconnected from the typing experience because of the 2-5s delay. I built VoiceFlow to solve this. It’s a native Rust core that targets 0.3s-0.6s latency. The goal is to make voice-to-text feel as instant as typing. Key features: Global hotkey [Ctrl+Space] to type into any app (Slack, VS Code, etc.) Native Rust implementation for performance and low memory footprint AI-based post-processing for punctuation and style Privacy-focused: Microphone is only active during the keypress I'm currently in private beta and looking for feedback, especially on the latency and UX. I'll be around to answer any technical questions! https://ift.tt/Vxthfwz March 8, 2026 at 10:56PM
Saturday, March 7, 2026
Show HN: MicroBin – Easy File Sharing for Everyone – Self-Hostable https://ift.tt/Bl1z5Hk
Show HN: MicroBin – Easy File Sharing for Everyone – Self-Hostable https://my.microbin.eu/ March 8, 2026 at 01:07AM
Show HN: Agentpng – turn agent sessions into shareable images https://ift.tt/FgxSUXE
Show HN: Agentpng – turn agent sessions into shareable images https://ift.tt/5YlqdHW March 8, 2026 at 02:01AM
Show HN: I built a daily game that tests if you can tell 1999 apart from 2005 https://ift.tt/zpwDh0J
Show HN: I built a daily game that tests if you can tell 1999 apart from 2005 https://yeartobeat.com/ March 7, 2026 at 09:44PM
Friday, March 6, 2026
Show HN: Mantle – Remap your Mac keyboard without editing Kanata config files https://ift.tt/g7vOsUX
Show HN: Mantle – Remap your Mac keyboard without editing Kanata config files I built Mantle because I wanted homerow mods and layers on my laptop without hand-writing Lisp syntax. The best keyboard remapping engine on macOS (Kanata) requires editing .kbd files, which is a pain. Karabiner-Elements is easy for simple single-key remapping (e.g. caps -> esc), but anything more wasn't working out for me. What you can do with Mantle: - Layers: hold a key to switch to a different layout (navigation, numpad, media) - Homerow mods: map Shift, Control, Option, Command to your home row keys when held - Tap-hold: one key does two things: tap for a letter, hold for a modifier - Import/export: bring existing Kanata .kbd configs or start fresh visually Runs entirely on your Mac. No internet, no accounts. Free and MIT licensed. Would love feedback, especially from people who tried Kanata or Karabiner and gave up https://getmantle.app/ March 7, 2026 at 12:26AM
Show HN: Mog, a programming language for AI agents https://ift.tt/JStrFOI
Show HN: Mog, a programming language for AI agents I wrote a programming language for extending AI agents, called Mog. It's like a statically typed Lua. Most AI agents have trouble enforcing their normal permissions in plugins and hooks, since they're external scripts. Mog's capability system gives the agent full control over I/O, so it can enforce whatever permissions it wants in the Mog code. This is even true if the plugin wants to run bash -- the agent can check each bash command the Mog code emits using the exact same predicate it uses for the LLM's direct bash tool. Mog is a statically typed, compiled, memory-safe language with native async support and minimal syntax. Its compiler and runtime are both written in Rust, and the runtime exposes an `extern "C"` interface so it can easily be embedded in agents written in other languages. It's designed to be written by LLMs. Its syntax is familiar, it minimizes foot-guns, and its full spec fits in a 3200-token file. The language is quite new, so no hard security guarantees are claimed at present. Contributions welcome! https://gist.github.com/belisarius222/203ac5edbc3306c34bf0481f451d4003 March 6, 2026 at 10:46PM
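The capability idea can be sketched outside Mog (plain Python here; `bash_allowed`, `run_plugin_bash`, and the deny-list are all hypothetical names for illustration, not Mog's API): the agent owns one predicate and routes both the LLM's direct bash tool and any plugin-emitted command through it.

```python
# Illustrative sketch of the capability idea (not Mog's actual API):
# one predicate, owned by the agent, gates every bash command regardless
# of whether the LLM or a plugin requested it.

DENIED_PREFIXES = ("rm -rf", "sudo ", "curl ")  # hypothetical deny-list

def bash_allowed(cmd: str) -> bool:
    # The same predicate the agent uses for the LLM's direct bash tool.
    return not cmd.strip().startswith(DENIED_PREFIXES)

def run_plugin_bash(cmd: str) -> str:
    # Plugin-emitted commands are routed through the identical check.
    if not bash_allowed(cmd):
        return f"denied: {cmd}"
    return f"ran: {cmd}"  # a real agent would execute the command here

print(run_plugin_bash("ls -la"))       # -> ran: ls -la
print(run_plugin_bash("rm -rf /tmp"))  # -> denied: rm -rf /tmp
```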
Show HN: VaultNote – Local-first encrypted note-taking in the browser https://ift.tt/ledpwmA
Show HN: VaultNote – Local-first encrypted note-taking in the browser Hi HN, I built VaultNote, a local-first note-taking app that runs entirely in the browser. Key ideas: - 100% local-first: no backend or server - No login, accounts, or tracking - Notes stored locally in IndexedDB / LocalStorage - AES encryption with a single master password - Tree-structured notes for organizing knowledge The goal was to create a simple note app where your data never leaves your device. You can open the site, enter a master password, and start writing immediately. Since everything is stored locally, VaultNote also supports import/export so you can back up your data. Curious to hear feedback from the HN community, especially on: - the security approach (local AES encryption) - IndexedDB storage design - local-first UX tradeoffs Demo: https://ift.tt/H0TeADm Thanks! https://ift.tt/DRFLpfv March 6, 2026 at 11:22PM
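The master-password approach can be sketched as follows (a Python illustration of password-to-key stretching; VaultNote itself runs in the browser, so this is the pattern, not the app's code, and the iteration count is just a commonly recommended value):

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # Stretch the master password into a 256-bit key suitable for AES.
    # PBKDF2-SHA256 is illustrative; the iteration count is a common
    # modern recommendation, not VaultNote's actual parameter.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations, dklen=32
    )

salt = os.urandom(16)  # random per vault, stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32
# deterministic: same password + same salt always yields the same key
assert derive_key("correct horse battery staple", salt) == key
```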
Thursday, March 5, 2026
Show HN: Cognitive architecture for Claude Code – triggers, memory, docs https://ift.tt/w3azeZ7
Show HN: Cognitive architecture for Claude Code – triggers, memory, docs This started as a psychology research project (building a psychoemotional safety scoring model) and turned into something more general: a reusable cognitive architecture for long-running AI agent work. The core problem: Claude Code sessions lose context. Memory files live outside the repo and can silently disappear. Design decisions made in Session 3 get forgotten by Session 8. Documentation drifts from reality. Our approach — 12 mechanical triggers that fire at specific moments (before responding, before writing to disk, at phase boundaries, on user pushback). Principles without firing conditions remain aspirations. Principles with triggers become infrastructure. What's interesting: - Cognitive trigger system — T1 through T12 govern agent behavior: anti-sycophancy checks, recommend-against scans, process vs. substance classification, 8-order knock-on analysis before decisions. Not prompting tricks — structural firing conditions. - Self-healing memory — Auto-memory lives outside the git repo. A bootstrap script detects missing/corrupt state, restores from committed snapshots with provenance headers, and reports what happened. The agent's T1 (session start) runs the health check before doing anything else. - Documentation propagation chain — 13-step post-session cycle that pushes changes through 10 overlapping documents at different abstraction levels. Content guards prevent overwriting good state with empty files. Versioned archives at every cycle. - Git reconstruction from chat logs — The project existed before its repo. We rebuilt git history by replaying Write/Edit operations from JSONL transcripts, with a weighted drift score measuring documentation completeness. The divergence report became a documentation coverage report. 
- Structured decision resolution — 8-order knock-on analysis (certain → likely → possible → speculative → structural → horizon) with severity-tiered depth and consensus-or-parsimony binding. All built on Claude Code with Opus. The cognitive architecture (triggers, skills, memory pattern) transfers to any long-running agent project — the psychology domain is the first application, not a constraint. Design phase — architecture resolved, implementation of the actual psychology agent hasn't started. The infrastructure for building it is the interesting part. Code: https://ift.tt/9tXGuTF Highlights if you want to skip around: - Trigger system: docs/cognitive-triggers-snapshot.md - Bootstrap script: bootstrap-check.sh - Git reconstruction: reconstruction/reconstruct.py - Documentation chain: .claude/skills/cycle/SKILL.md - Decision resolution: .claude/skills/adjudicate/SKILL.md - Research journal: journal.md (the full narrative, 12 sections) Happy to discuss the trigger design, the memory recovery pattern, or why we think documentation propagation matters more than people expect for AI-assisted work. https://ift.tt/9tXGuTF March 5, 2026 at 10:05PM
Wednesday, March 4, 2026
Show HN: Gobble – Yet Another OSS Alternative to Google Analytics/PostHog, etc. https://ift.tt/hmcXVxf
Show HN: Gobble – Yet Another OSS Alternative to Google Analytics/PostHog, etc. https://ift.tt/O8rMZ9l March 5, 2026 at 01:12AM
Show HN: Qlog – grep for logs, but 100x faster https://ift.tt/I9cqWmY
Show HN: Qlog – grep for logs, but 100x faster I built qlog because I got tired of waiting for grep to search through gigabytes of logs. qlog uses an inverted index (like search engines) to search millions of log lines in milliseconds. It's 10-100x faster than grep and way simpler than setting up Elasticsearch. Features: - Lightning fast indexing (1M+ lines/sec using mmap) - Sub-millisecond searches on indexed data - Beautiful terminal output with context lines - Auto-detects JSON, syslog, nginx, apache formats - Zero configuration - Works offline - Pure Python Example: qlog index './logs/*/*.log' qlog search "error" --context 3 I've tested it on 10GB of logs and it's consistently 3750x faster than grep. The index is stored locally so repeated searches are instant. Demo: Run `bash examples/demo.sh` to see it in action. GitHub: https://ift.tt/0u6ZSFj Perfect for developers/DevOps folks who search logs daily. Happy to answer questions! https://ift.tt/0u6ZSFj March 5, 2026 at 12:17AM
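The inverted-index idea behind this kind of speedup can be sketched in a few lines (a toy illustration of the technique, not qlog's implementation):

```python
import re
from collections import defaultdict

def build_index(lines):
    # Inverted index: map each lowercase token to the set of line
    # numbers that contain it, like a search engine's postings lists.
    index = defaultdict(set)
    for lineno, line in enumerate(lines):
        for token in re.findall(r"\w+", line.lower()):
            index[token].add(lineno)
    return index

def search(index, lines, query):
    # AND semantics: intersect the posting sets instead of scanning
    # every line, then return the matching lines in order.
    postings = [index.get(tok, set()) for tok in query.lower().split()]
    hits = set.intersection(*postings) if postings else set()
    return [lines[i] for i in sorted(hits)]

logs = [
    "2026-03-04 ERROR db connection refused",
    "2026-03-04 INFO request ok",
    "2026-03-04 ERROR timeout on db",
]
index = build_index(logs)
print(search(index, logs, "error db"))  # -> both ERROR lines
```

Repeated queries only touch the postings sets, which is why indexed search stays fast regardless of log size.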
Show HN: WooTTY - browser terminal in a single Go binary https://ift.tt/gBLso0e
Show HN: WooTTY - browser terminal in a single Go binary I needed a web terminal I could drop into K8s sidecars and internal tools without pulling in heavy dependencies or running a separate service. Existing options were either too opinionated about the shell or had fragile session handling around reconnects. WooTTY wraps any binary -- bash, ssh, or custom tools -- and serves a browser terminal over HTTP. Sessions survive reconnects via output replay. There's a Resume/Watch distinction so multiple people can attach to the same session without stepping on each other. https://ift.tt/nTG8Fk0 March 4, 2026 at 11:32PM
Tuesday, March 3, 2026
Show HN: Agent Action Protocol (AAP) – MCP got us started, but is insufficient https://ift.tt/tlja3ov
Show HN: Agent Action Protocol (AAP) – MCP got us started, but is insufficient Background: I've been working on agentic guardrails because agents act in expensive/terrible ways and something needs to be able to say "Maybe don't do that" to the agents, but guardrails are almost impossible to enforce with the current way things are built. Context: We keep running into so many problems/limitations today with MCP. It was created so that agents have context on how to act in the world; it wasn't designed to become THE standard rails for agentic behavior. We keep tacking things on to it trying to improve it, but it needs to die a SOAP death so REST can rise in its place. We need a standard protocol for whenever an agent is taking action. Anywhere. I'm almost certainly the wrong person to design this, but I'm seeing more and more people tack things on to MCP rather than fix the underlying issues. The fastest way to get a good answer is to submit a bad one on the internet. So here I am. I think we need a new protocol. Whether it's AAP or something else, I submit my best effort. Please rip it apart; let's make something better. https://ift.tt/GtCop5h March 3, 2026 at 09:22PM
Show HN: A tool to give every local process a stable URL https://ift.tt/yav96BQ
Show HN: A tool to give every local process a stable URL In working with parallel agents in different worktrees, I ran into a lot of port conflicts, cookie bleed, and constant back-and-forth checking which incremented port my dev server was running on. This isn't a big issue if you're running a few servers with a full-stack framework like Next, Nuxt, or SvelteKit, but if you run a Rust backend and a Vite frontend in multiple worktrees, it gets way more complicated, and the mental model starts to break. That's not even adding in databases or caches. So I built Roxy, a single Go binary that wraps your dev servers (or any process, actually) and gives you a stable .test domain based on the branch name and cwd. It runs a proxy and DNS server that handles all the domain routing, TLS, port mapping, and forwarding for you. It currently supports: - HTTP for your web apps and APIs - Most TCP connections for your db, cache, and message/queue layers - TLS support so you can run HTTPS - Running multiple processes at once, each with a unique URL, like Docker Compose - Git and worktree awareness - Detached mode - Zero-config startup My co-workers and I have been using it a lot in our workflow and I think it's ready for public use. We support macOS and Linux. I'll be working on some more useful features like Docker Compose/Procfile compatibility and tunneling so you can access your dev environment remotely with a human-readable URL. Give it a try, and open an issue if something doesn't quite work right, or to request a feature! https://ift.tt/uSOrK4Y https://ift.tt/uSOrK4Y March 4, 2026 at 12:18AM
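One plausible way to derive a stable, collision-free .test domain from the branch name and cwd (a hypothetical scheme sketched in Python for illustration; not necessarily how Roxy actually names things):

```python
import hashlib
import re
from pathlib import Path

def dev_domain(cwd: str, branch: str) -> str:
    # Slugify the branch and project directory; a short hash of the full
    # cwd keeps two worktrees on the same branch from colliding.
    slug = lambda s: re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
    digest = hashlib.sha256(cwd.encode()).hexdigest()[:6]
    return f"{slug(branch)}.{slug(Path(cwd).name)}-{digest}.test"

print(dev_domain("/home/me/api-wt1", "feat/auth"))
# e.g. feat-auth.api-wt1-<hash>.test
```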
Monday, March 2, 2026
Sunday, March 1, 2026
Show HN: Updater – one command for macOS app updates https://ift.tt/W7mPiFo
Show HN: Updater – one command for macOS app updates I built updater to solve a small but annoying problem: macOS app updates are fragmented across different systems. updater scans installed apps, determines where each app should be checked (Sparkle, Homebrew casks/formulae, Mac App Store via mas, GitHub Releases, and macOS system updates), then runs source specific update actions from the terminal. It also has an interactive TUI (run `updater` with no args). A few commands: updater check updater update --all updater update "1Password" Repo: https://ift.tt/GVljZok Would love feedback, especially on reliability and edge cases. https://ift.tt/GVljZok March 2, 2026 at 12:46AM
Show HN: Mrkd – A native macOS Markdown viewer with iTerm2/VSCode theme import https://ift.tt/jftBxwU
Show HN: Mrkd – A native macOS Markdown viewer with iTerm2/VSCode theme import Using Opus 4.6 I built a markdown viewer for macOS that uses zero web technology. No Electron, no WebView — markdown is parsed with cmark-gfm and rendered directly to NSAttributedString via TextKit 2. The result is native text selection, native accessibility, and a ~1MB binary that launches pretty much instantly. It supports GFM tables, task lists, syntax-highlighted code blocks, and inline images. You get built-in themes (Solarized, Dracula, GitHub, Monokai) plus the ability to import your own from iTerm2 or VS Code theme files. The part I’m most pleased with is the Quick Look integration — select a .md file in Finder, hit Space, and you get a fully themed preview using whatever theme and fonts you’ve configured in the app. No setup required; the QL extension registers automatically on first launch. It also bundles variable fonts (Geist, Inter, JetBrains Mono, iA Writer Mono, and more) so typography looks good out of the box. The whole thing is built in Swift with no dependencies beyond cmark-gfm and Highlightr. https://ift.tt/Xo0qzP2 https://ift.tt/Xo0qzP2 March 2, 2026 at 12:18AM
Show HN: PraxisJS – signal-driven front end framework and AI experiment https://ift.tt/Hm8pW2d
Show HN: PraxisJS – signal-driven front end framework and AI experiment I built PraxisJS, a signal-driven frontend framework exploring what a more explicit and traceable architecture could look like. PraxisJS started as a personal project. It reflects a single perspective on frontend design, not a committee decision, not a consensus. I wanted to see how far you can push explicitness before it becomes friction. Most frameworks optimize for writing less. PraxisJS questions that tradeoff. @State doesn’t suggest reactivity, it is reactive, visible in the code. Signals reach the DOM without a reconciliation layer in between (the renderer is still evolving toward that goal). It also became an AI-assisted experiment, not to automate thinking, but to pressure-test ideas. Some parts came from that collaboration. Some exist because it failed. v0.1.0 beta, experimental, not production-ready. But the ideas are real. https://praxisjs.org/ March 1, 2026 at 11:27PM
Saturday, February 28, 2026
Show HN: Free, open-source native macOS client for di.fm https://ift.tt/kjAPurQ
Show HN: Free, open-source native macOS client for di.fm I built a menu bar app for streaming DI.FM internet radio on macOS. Swift/SwiftUI, no Electron. The existing options for DI.FM on desktop are either the web player (yet another browser tab) or unofficial Electron wrappers that idle at 200+ MB of RAM to play an audio stream. This sits in the menu bar at ~35 MB RAM and 0% CPU. The .app is about 1 MB. What it does: browse and search stations, play/pause, volume, see what's playing (artwork, artist, track, time), pick stream quality (320k MP3, 128k AAC, 64k AAC). Media keys work. It remembers your last station. Built with AVPlayer for streaming, MenuBarExtra for the UI, MPRemoteCommandCenter for media key integration. The trickiest part was getting accurate elapsed time. DI.FM's API and the ICY stream metadata don't always agree, so there's a small state machine that reconciles the two sources. macOS 14+ required. You need a DI.FM premium account for the high-quality streams. Source and binary: https://ift.tt/Uvj2gXV https://ift.tt/Uvj2gXV March 1, 2026 at 02:21AM
Show HN: Monohub – a new GitHub alternative / code hosting service https://ift.tt/C9fFxMw
Show HN: Monohub – a new GitHub alternative / code hosting service Hello everyone, My name is Teymur Bayramov, and I am developing a forge/code hosting service called Monohub. It is at a fairly early stage of development, so it's quite rough around the edges. It is developed and hosted in the EU. I started developing it as a slim wrapper around Git to serve my own code, but it grew to such an extent that I decided to give it a try and offer it as a service. It doesn't have much at the moment, but it already has basic pull requests. Accessibility is a high priority. It will be a paid service, but since it's an early start, an "early adopter discount" is applied – 6 months for free. No card details required. I would be happy if you gave it a try and let me know what you think, and perhaps shared what you lack in existing solutions that you would like to see implemented here. Warmest wishes, Teymur. https://monohub.dev/ February 28, 2026 at 11:13PM
Friday, February 27, 2026
Show HN: Claude-File-Recovery, recover files from your ~/.claude sessions https://ift.tt/E92tTcp
Show HN: Claude-File-Recovery, recover files from your ~/.claude sessions Claude Code deleted my research and plan markdown files: it rm -rf'd real directories in my Obsidian vault through a symlink it didn't realize was there, then informed me, “I made a mistake.” Unfortunately, the backup of my documentation hadn't run for a month. So I built claude-file-recovery, a CLI tool and TUI that extracts your files from your ~/.claude session history, and thankfully I was able to recover my files. It can extract any file that Claude Code ever read, edited, or wrote. I hope you will never need it, but you can find it on my GitHub and pip. Note: It can recover an earlier version of a file at a certain point in time. pip install claude-file-recovery https://ift.tt/DFQ1rhm February 27, 2026 at 08:26PM
Show HN: Interactive Resume/CV Game https://ift.tt/06J9LXG
Show HN: Interactive Resume/CV Game https://breezko.dev February 27, 2026 at 11:21PM
Thursday, February 26, 2026
Show HN: Safari-CLI – Control Safari without an MCP https://ift.tt/EWZTlSa
Show HN: Safari-CLI – Control Safari without an MCP Hello HN! I built this tool to help my agentic software development (vibe coding) workflow. I wanted to debug Safari specific frontend bugs using copilot CLI, however MCP servers are disabled in my organisation. Therefore I built this CLI tool to give the LLM agent control over the browser. Hope you'll find it useful! https://ift.tt/iA1d5by February 27, 2026 at 01:18AM
Show HN: I stopped building apps for people. Now I make CLI tools for agents https://ift.tt/agQ5tTv
Show HN: I stopped building apps for people. Now I make CLI tools for agents https://ift.tt/aWxu9U1 February 26, 2026 at 11:14PM
Wednesday, February 25, 2026
Show HN: Linex – A daily challenge: placing pieces on a board that fights back https://ift.tt/5jQhTcz
Show HN: Linex – A daily challenge: placing pieces on a board that fights back Hi HN, I wanted to share a web game I’ve been building in HTML, JavaScript, MySQL, and PHP called LINEX. It is primarily designed and optimized to be played in the mobile browser. The idea is simple: you have an 8x8 board where you must place pieces (Tetris-style and some custom shapes) to clear horizontal and vertical lines. Yes, someone might think this has already been done, but let me explain. You choose where to place the piece and how to rotate it. The core interaction consists of "drawing" the piece tap-by-tap on the grid, which provides a very satisfying tactile sense of control and requires a much more thoughtful strategy. To avoid the flat difficulty curve typical of games in this genre, I’ve implemented a couple of twists: 1. Progressive difficulty (The board fights back): As you progress and clear lines, permanently blocked cells randomly appear on the board. This forces you to constantly adapt your spatial vision. 2. Tools to defend yourself: To counter frustration, you have a very limited number of aids (skip the piece, choose another one, or use a special 1x1 piece). These resources increase slightly as the board fills up with blocked cells, forcing you to decide the exact right moment to use them. The game features a daily challenge driven by a date-based random seed (PRNG). Everyone gets exactly the same sequence of pieces and blockers. Furthermore, the base difficulty scales throughout the week: on Mondays you start with a clean board (0 initial blocked cells, although several will appear as the game progresses), and the difficulty ramps up until Sunday, where you start the game with 3 obstacles already in place. In addition to the global medal leaderboard, you can add other users to your profile to create a private leaderboard and compete head-to-head just with your friends. 
Time is also an important factor, as in the event of a tie in cleared lines, the player who completed them faster will rank higher on the leaderboard. I would love for you to check it out. I'm especially looking for honest feedback on the difficulty curve, the piece-placement interaction (UI/UX), or the balancing of obstacles/tools, although any other ideas, critiques, or suggestions are welcome. https://ift.tt/LPkDXhv Thanks! https://ift.tt/LPkDXhv February 25, 2026 at 03:33AM
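The date-seeded daily challenge can be sketched like this (a Python illustration; the shape names and the exact weekday ramp are assumptions based on the description above, not LINEX's PHP code):

```python
import datetime
import random

SHAPES = ["I", "L", "T", "S", "Z", "O", "corner"]  # hypothetical piece set

def daily_sequence(date: datetime.date, n: int = 20):
    # Seed a PRNG with the ISO date string so every player on a given
    # day gets exactly the same sequence of pieces.
    rng = random.Random(date.isoformat())
    # Base difficulty ramps across the week: Monday -> 0 starting
    # blockers, Sunday -> 3 (linear interpolation, rounded).
    blockers = round(date.weekday() * 3 / 6)
    return [rng.choice(SHAPES) for _ in range(n)], blockers

pieces, blockers = daily_sequence(datetime.date(2026, 2, 22))  # a Sunday
print(blockers)  # -> 3
```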
Show HN: I ported Manim to TypeScript (run 3b1B math animations in the browser) https://ift.tt/yua30AI
Show HN: I ported Manim to TypeScript (run 3b1B math animations in the browser) Hi HN, I'm Narek. I built Manim-Web, a TypeScript/JavaScript port of 3Blue1Brown’s popular Manim math animation engine. The Problem: Like many here, I love Manim's visual style. But setting it up locally is notoriously painful - it requires Python, FFmpeg, Cairo, and a full LaTeX distribution. It creates a massive barrier to entry, especially for students or people who just want to quickly visualize a concept. The Solution: I wanted to make it zero-setup, so I ported the engine to TypeScript. Manim-Web runs entirely client-side in the browser. No Python, no servers, no install. It runs animations in real-time at 60fps. How it works underneath: - Rendering: Uses Canvas API / WebGL (via Three.js for 3D scenes). - LaTeX: Rendered and animated via MathJax/KaTeX (no LaTeX install needed!). - API: I kept the API almost identical to the Python version (e.g., scene.play(new Transform(square, circle))), meaning existing Manim knowledge transfers over directly. - Reactivity: Updaters and ValueTrackers follow the exact same reactive pattern as the Python original. Because it's web-native, the animations are now inherently interactive (objects can be draggable/clickable) and can be embedded directly into React/Vue apps, interactive textbooks, or blogs. I also included a py2ts converter to help migrate existing scripts. Live Demo: https://maloyan.github.io/manim-web/examples GitHub: https://ift.tt/TQPNv1t It's open-source (MIT). I'm still actively building out feature parity with the Python version, but core animations, geometry, plotting, and 3D orbiting are working great. I would love to hear your feedback, and I'll be hanging around to answer any technical questions about rendering math in the browser! https://ift.tt/TQPNv1t February 25, 2026 at 10:15PM
Tuesday, February 24, 2026
Show HN: Open-Weight Image-Video VAE (Better Reconstruction ≠ Better Generation) https://ift.tt/bFNOw8B
Show HN: Open-Weight Image-Video VAE (Better Reconstruction ≠ Better Generation) https://ift.tt/vJnzrSH February 24, 2026 at 10:59PM
Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP) https://ift.tt/1xiGIsj
Show HN: Chaos Monkey but for Audio Video Testing (WebRTC and UDP) It takes an input video and converts it into H.264/Opus RTP streams that you can blast at your video call systems (WebRTC, SFUs, etc.). It also injects network chaos like packet loss, jitter, and bitrate throttling to see how things break It scales from 1 to n participants, depending on the compute and memory of the host system Best part? It’s packaged with Nix, so it builds the same everywhere (Linux, macOS, ARM, x86). No dependency hell It supports both UDP (with a relay chain for Kubernetes) and WebRTC (with containerized TURN servers). Chaos spikes can be distributed evenly, randomly, or front/back-loaded for different test scenarios. To change this, just edit the values in a single config file https://ift.tt/MNteC4q February 23, 2026 at 12:53PM
Monday, February 23, 2026
Show HN: I vibe-coded a custom WebGPU engine for my MMO https://ift.tt/y48hEzM
Show HN: I vibe-coded a custom WebGPU engine for my MMO It took me about a week to vibe code this 3D game engine with Opus 4.6 that I intend to use as a replacement for Three.js and React Three Fiber in my browser MMORPG, Mana Blade. I was not expecting to be able to reach that point so easily, but pretty much every feature took somewhere between 30 minutes and 1 hour - 1 to 3 prompts on average. It is vibe-coded in the sense that I haven't looked at the code, but I am very careful with my prompts and constantly have Claude reviewing the codebase, looking for performance and code quality improvements. It can reach 2000 draw calls on recent integrated GPUs, such as modern phones or MacBooks, where Three.js usually starts dropping frames at 300-600 draw calls. I love Three.js, but I wanted to build something more minimal that does exactly what I need with better performance. I started with a C/WASM core but ended up sticking with JS because the performance difference wasn't significant enough for the number of entities in my game (never more than 500 entities). All in all, it was a fascinating experience, and I learned a lot about engines, even without typing a single line of code. It's pretty wild that we can now quite easily build in-house engines alongside our games as solo developers. https://ift.tt/ziIckJH February 23, 2026 at 10:30PM
Show HN: Unlock the best engineering knowledge in papers for your coding agent https://ift.tt/OgZXV5c
Show HN: Unlock the best engineering knowledge in papers for your coding agent https://ift.tt/gPlTa2c February 23, 2026 at 09:33PM
Sunday, February 22, 2026
Show HN: Mujoco React https://ift.tt/j3yftRb
Show HN: Mujoco React MuJoCo physics simulation in the browser using React. This is made possible by DeepMind's mujoco-wasm (mujoco-js), which compiles MuJoCo to WebAssembly. We wrap it with React Three Fiber so you can load any MuJoCo model, step physics, and write controllers as React components, all running client-side in the browser https://ift.tt/JOoVQl6 February 22, 2026 at 10:29PM
Saturday, February 21, 2026
Show HN: DevBind – I made a Rust tool for zero-config local HTTPS and DNS https://ift.tt/HYfqwP5
Show HN: DevBind – I made a Rust tool for zero-config local HTTPS and DNS Hey HN, I got tired of messing with /etc/hosts and browser SSL warnings every time I started a new project. So I wrote DevBind. It's a small reverse proxy in Rust. It basically does two things: 1. Runs a tiny DNS server so anything.test just works instantly (no more manual hosts file edits). 2. Sits on port 443 and auto-signs SSL certs on the fly so you get the nice green lock in Chrome/Firefox. It's been built mostly for Linux (it hooks into systemd-resolved), but I've added some experimental bits for Mac/Win too. Still a work in progress, but I've been using it for my own dev work and it's saved me a ton of time. Would love to know if it breaks for you or if there's a better way to handle the networking bits! https://ift.tt/ni7cqL9 February 22, 2026 at 12:19AM
Show HN: Winslop – De-Slop Windows https://ift.tt/XWoYZED
Show HN: Winslop – De-Slop Windows https://ift.tt/E84fUxd February 21, 2026 at 11:56PM
Friday, February 20, 2026
Show HN: Manifestinx-verify – offline verifier for evidence bundles (drift) https://ift.tt/8nMyfpX
Show HN: Manifestinx-verify – offline verifier for evidence bundles (drift) Manifest-InX EBS is a spec + offline verifier + proof kit for tamper-evident evidence bundles. Non-negotiable alignment: - Live provider calls are nondeterministic. - Determinism begins at CAPTURE (pinned artifacts). - Replay is deterministic offline. - Drift/tamper is deterministically rejected. Try it in typically ~10 minutes (no signup): 1) Run the verifier against the included golden bundle → PASS 2) Tamper an artifact without updating hashes → deterministic drift/tamper rejection Repo: https://ift.tt/Oqy43nN Skeptic check: docs/ebs/PROOF_KIT/10_MINUTE_SKEPTIC_CHECK.md Exit codes: 0=OK, 2=DRIFT/TAMPER, 1=INVALID/ERROR Boundaries: - This repo ships verifier/spec/proof kit only. The Evidence Gateway (capture/emission runtime) is intentionally not included. - This is not a “model correctness / no hallucinations” claim—this is evidence integrity + deterministic replay/verification from pinned artifacts. Looking for feedback: - Does the exit-code model map cleanly to CI gate usage? - Any spec/report format rough edges that block adoption? https://ift.tt/Oqy43nN February 20, 2026 at 10:27PM
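The exit-code contract above is concrete enough to sketch. The following is an illustrative Python model of the verification flow, not the Manifest-InX verifier itself; the manifest shape and function names are assumptions:

```python
# Hedged sketch of the described exit-code model: verify pinned artifacts
# against recorded SHA-256 hashes. 0=OK, 1=INVALID/ERROR, 2=DRIFT/TAMPER.
import hashlib

OK, INVALID, DRIFT = 0, 1, 2

def verify(manifest: dict[str, str], artifacts: dict[str, bytes]) -> int:
    if set(manifest) != set(artifacts):
        return INVALID                       # malformed bundle: missing/extra files
    for name, pinned in manifest.items():
        actual = hashlib.sha256(artifacts[name]).hexdigest()
        if actual != pinned:
            return DRIFT                     # deterministic tamper rejection
    return OK

good = {"a.txt": b"hello"}
manifest = {"a.txt": hashlib.sha256(b"hello").hexdigest()}
assert verify(manifest, good) == 0
assert verify(manifest, {"a.txt": b"hell0"}) == 2   # tampered artifact
```

This shape maps cleanly onto CI gates: a pipeline can fail on any nonzero code, or treat 2 as a hard security alert and 1 as a retryable configuration error.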
Thursday, February 19, 2026
Show HN: A small, simple music theory library in C99 https://ift.tt/PT6Il0x
Show HN: A small, simple music theory library in C99 https://ift.tt/5Q7IZUd February 20, 2026 at 02:54AM
Show HN: Hi.new – DMs for agents (open-source) https://ift.tt/DeSzuka
Show HN: Hi.new – DMs for agents (open-source) https://www.hi.new/ February 20, 2026 at 01:20AM
Show HN: Astroworld – A universal N-body gravity engine in Python https://ift.tt/VBbLmlK
Show HN: Astroworld – A universal N-body gravity engine in Python I’ve been working on a modular N-body simulator in Python called Astroworld. It started as a Solar System visualizer, but I recently refactored it into a general-purpose engine that decouples physical laws from planetary data. Technical highlights: Symplectic integration: uses a Velocity Verlet integrator to maintain long-term energy conservation ($\Delta E/E \approx 10^{-8}$ in stable systems). Agnostic architecture: it can ingest any system via orbital elements (Keplerian) or state vectors. I've used it to validate the stability of ultra-compact systems like TRAPPIST-1 and long-period perturbations like the Planet 9 hypothesis. Validation: includes 90+ physical tests, including Mercury’s relativistic precession using Schwarzschild metric corrections. The Planet 9 experiment: I ran a 10k-year simulation to track the differential signal in the argument of perihelion ($\omega$) for TNOs like Sedna. The result ($\approx 0.002^{\circ}$) was a great sanity check for the engine’s precision, as this effect is secular and requires millions of years to fully manifest. The stack: NumPy for vectorization, Matplotlib for 2D analysis, and Plotly for interactive 3D trajectories. I'm currently working on a real-time 3D rendering layer. I’d love to get feedback on the integrator’s stability for high-eccentricity orbits or suggestions on implementing more complex gravitational potentials. https://ift.tt/rTHXu2h February 19, 2026 at 11:57PM
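The integrator choice is the technical core here. A minimal, pure-Python velocity Verlet on a circular two-body orbit ($GM = 1$) shows why the energy error stays bounded rather than growing; this is an illustration of the scheme, not Astroworld's code:

```python
# Velocity Verlet (kick-drift-kick) on a unit circular orbit: symplectic
# integrators keep the relative energy error oscillating around a small
# bound instead of drifting secularly, which is the property cited above.
import math

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3          # central gravity, GM = 1

def verlet(x, y, vx, vy, dt, steps):
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx;        y += dt * vy          # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

e0 = energy(1.0, 0.0, 0.0, 1.0)                    # circular orbit, E = -0.5
x, y, vx, vy = verlet(1.0, 0.0, 0.0, 1.0, 1e-3, 20000)
drift = abs((energy(x, y, vx, vy) - e0) / e0)
assert drift < 1e-5                                # bounded energy drift
```

With a non-symplectic scheme such as forward Euler, the same experiment shows energy growing monotonically with step count, which is exactly what ruins long-horizon runs like the 10k-year Planet 9 simulation.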
Wednesday, February 18, 2026
Show HN: Sports-skills.sh – sports data connectors for AI agents https://ift.tt/9BFXrzb
Show HN: Sports-skills.sh – sports data connectors for AI agents We built this because every sports AI demo uses fake data or locks you behind an enterprise API contract. sports-skills gives your agent real sports data with one install command. No API keys. No accounts. For personal use. Eight connectors out of the box: NFL, soccer across 13 leagues with xG, Formula 1 lap and pit data, NBA, WNBA, Polymarket, Kalshi, and a sports news aggregator pulling from BBC/ESPN/The Athletic. npx skills add machina-sports/sports-skills Open for contributions. https://ift.tt/19rtecq February 19, 2026 at 12:40AM
Show HN: Keystone – configure Dockerfiles and dev containers for any repo https://ift.tt/cCgmxBo
Show HN: Keystone – configure Dockerfiles and dev containers for any repo We kept hitting the same wall: you clone some arbitrary repo and just want it to run without any configuration work. So we built Keystone, an open source tool that spins up a Modal sandbox, runs Claude Code inside it, and produces a working .devcontainer/ config (Dockerfile, devcontainer.json, test runner) for any git repo. We build on the dev container standard, so the output works with VS Code and GitHub Codespaces out of the box. Main use cases: reproducible dev/CI environments, self-describing repos, and safely sandboxed coding agents. Our goal is to encourage all repos to self-describe their runtime environment. Why the sandbox? Running Claude directly against your Docker daemon is risky. We've watched it clear Docker config and tweak kernel settings when iterating on containers. Containerization matters most when your agent is acting like a sysadmin. To use it: get a Modal account and an Anthropic API key, run Keystone on your repo, check in the .devcontainer/ directory. See the project README for more details. https://ift.tt/xS0qnXc February 18, 2026 at 10:40PM
Tuesday, February 17, 2026
Show HN: Pg-typesafe – Strongly typed queries for PostgreSQL and TypeScript https://ift.tt/CbflIXy
Show HN: Pg-typesafe – Strongly typed queries for PostgreSQL and TypeScript Throughout my career, I tried many tools to query PostgreSQL, and in the end, concluded that for what I do, the simplest is almost always the best: raw SQL queries. Until now, I typed the results manually and relied on tests to catch problems. While this is OK in, say, Go, it is quite annoying in TypeScript. First, because of the more powerful type system (it's easier to guess that updated_at is a date than it is to guess whether it's nullable or not), and second, because of idiosyncrasies (INT4s are deserialised as JS numbers, but INT8s are deserialised as strings). So I wrote pg-typesafe, with the goal of being the least burdensome option: you call queries exactly the same way as you would with node-pg, and they are fully typed. It's very new, but I'm already using it in a large-ish project, where it found several bugs and footguns, and also allowed me to remove many manual type definitions. https://ift.tt/CVeQfyB February 17, 2026 at 10:15PM
Show HN: I'm launching a LPFM radio station https://ift.tt/R5OpPL9
Show HN: I'm launching a LPFM radio station I've been working on creating a Low Power FM radio station for the east San Fernando Valley of Los Angeles. We are not yet on the broadcast band, but our channel will be 95.9FM and our range can be seen on the homepage of our site. KPBJ is a freeform community radio station. Anyone in the area is encouraged to get a timeslot and become a host. We make no curatorial decisions. It's sort of like public access or a college station in that way. This month we launched our internet stream and on-boarded about 60 shows. They are mostly music but there are a few talk shows. We are restricting all shows to monthly time slots for now but this will change in the near future as everyone gets more familiar with the systems involved. All shows are pre-recorded until we can raise the money to get a studio. We have a site secured for our transmitter but we need to fundraise to cover the equipment and build out costs. We will be broadcasting with 100W ERP from a ridgeline in the Verdugos at about 1500ft elevation. The site will need to be off grid so we will need to install a solar system with battery backup. We are planning to sync the station to the transmit site with 802.11ah. I've built all of our web infrastructure using Haskell, NixOS, Terraform, and HTMX: https://ift.tt/GJEcQA8 This is a pretty substantial project involving a bunch of social and technical challenges on a shoestring budget. I feel pretty confident we will pull it off and make it a high impact local radio station. The station is managed by a 501c3 non-profit we created. We are actively seeking fundraising, especially to get our transmit site up and running. If you live in the area or want to contribute in any way then please reach out! https://www.kpbj.fm/ February 18, 2026 at 12:15AM
Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway https://ift.tt/1QKODoN
Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway https://ift.tt/urXiVea February 17, 2026 at 11:24PM
Monday, February 16, 2026
Show HN: Nothing as a Service – Premium nothingness for minimalists https://ift.tt/eSpat4i
Show HN: Nothing as a Service – Premium nothingness for minimalists https://euphonious-blancmange-24c5b0.netlify.app/ February 16, 2026 at 10:01PM
Show HN: Nerve: Stitches all your data sources into one mega-API https://ift.tt/pXhFsIr
Show HN: Nerve: Stitches all your data sources into one mega-API Hi HN! Nerve is a solo project I've been working on for the last few years. It's a developer tool that stitches together data from multiple sources in real-time. A lot of high-leverage projects (AI or otherwise) involve tying data together from multiple systems of record. This is easy enough when the data is simple and the sources are few, but if you have highly nested data and lots of sources (or you need things like federated pagination and filtering), you have to write a lot of gnarly boilerplate that's brittle and easy to get wrong. One solution is to import all your data into a central warehouse and just pull it from there. This works, but 1) you need a warehouse, 2) you have an extra copy of the data that can get stale or inconsistent, 3) you need to write and manage pipelines/connectors (or outsource them to a vendor), and 4) you're adding an extra point of failure. Nerve lets you write GraphQL-style queries that span multiple sources; then it goes out and pulls from whatever source APIs it needs to at query-time - all your source data stays where it is. Nerve has pre-built bindings to external SAAS services, and it's straightforward to hook it into your internal sources as well. Nerve is made for individual developers or two-pizza teams who: -Are building agents/internal tools -Need to deal with messy data strewn across different systems -Don't have a data team/warehouse at their disposal, (or do, but can't get a slice of their bandwidth) -Want to get to production as quickly as possible Everything you see in the demo is shipped and usable, but I'm adding a little polish before I officially launch. In the meantime, if you have a project you'd like to use Nerve on and you want to be a beta user, just drop me a line at mprast@get-nerve.com (it's free! 
I'll just pop in from time to time to ask you how it's going and what I can improve :) ) If you want to get an email when Nerve is ready for prime time, you can sign up for the waitlist at get-nerve.com. Thanks for reading! (EDIT: Nerve is desktop only! I'll put up a gate on the site saying as much.) https://ift.tt/Z2NTdv1 February 15, 2026 at 03:07AM
Sunday, February 15, 2026
Show HN: Please hack my C webserver (it's a collaborative whiteboard) https://ift.tt/30Hf1QR
Show HN: Please hack my C webserver (it's a collaborative whiteboard) Source code: https://ift.tt/OFVtG4D https://ced.quest/draw/ February 15, 2026 at 10:57PM
Show HN: VOOG – Moog-style polyphonic synthesizer in Python with tkinter GUI https://ift.tt/b7wpHfy
Show HN: VOOG – Moog-style polyphonic synthesizer in Python with tkinter GUI I built a polyphonic synthesizer in Python with a tkinter GUI styled after the Moog Subsequent 37. Features: 3 oscillators, Moog ladder filter (24dB/oct), dual ADSR envelopes, LFO, glide, noise generator, 4 multitimbral channels, 19 presets, rotary knob GUI, virtual keyboard with mouse + QWERTY input, and MIDI support. No external GUI frameworks — just tkinter, numpy, and sounddevice. https://ift.tt/xJIo60M February 15, 2026 at 11:40PM
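As a flavor of the building blocks listed, here is a hedged sketch of a linear ADSR envelope in plain Python; the function signature and parameter names are assumptions for illustration, not VOOG's actual API:

```python
# Illustrative linear ADSR envelope sampled at a fixed rate: ramp to 1 over
# `attack`, fall to `sustain` over `decay`, hold, then fall to 0 over
# `release`. Multiplying this against an oscillator shapes each note.

def adsr(attack, decay, sustain, release, hold, sr=44100):
    """Times in seconds; `sustain` is an amplitude level in 0..1."""
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    env = []
    env += [i / max(a, 1) for i in range(a)]                      # attack ramp
    env += [1 - (1 - sustain) * i / max(d, 1) for i in range(d)]  # decay ramp
    env += [sustain] * int(hold * sr)                             # sustain plateau
    env += [sustain * (1 - i / max(r, 1)) for i in range(r)]      # release ramp
    return env

env = adsr(0.01, 0.05, 0.7, 0.1, hold=0.2, sr=1000)
assert 0.0 <= min(env) and max(env) <= 1.0
```

A real synth would typically compute this per-sample with exponential segments and numpy vectorization, but the stage structure is the same.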
Show HN: Microgpt is a GPT you can visualize in the browser https://ift.tt/WZdS6Gr
Show HN: Microgpt is a GPT you can visualize in the browser very much inspired by karpathy's microgpt of the same name. it's (by default) a 4000 param GPT/LLM/NN that learns to generate names. this is sorta an educational tool in that you can visualize the activations as they pass through the network, and click on things to get an explanation of them. https://ift.tt/M4RX6xY February 15, 2026 at 10:40PM
Show HN: An open-source extension to chat with your bookmarks using local LLMs https://ift.tt/bGTrQNJ
Show HN: An open-source extension to chat with your bookmarks using local LLMs I read a lot online and constantly bookmark articles, docs, and resources… then forget why I saved them. I was also very bored on Valentine's, so I built a browser extension that lets you chat with your bookmarks directly, using local-first AI (WebLLM running entirely in the browser). The extension downloads and indexes your bookmarked pages, stores them locally, and lets you ask questions. No server, no cloud processing, everything stays on your machine. It's very early, but it works, and I'm planning to add a bunch of stuff. Did I mention it's open-source, MIT licensed? https://ift.tt/HTD4teZ February 15, 2026 at 09:01PM
Saturday, February 14, 2026
Show HN: Rover – Embeddable web agent https://ift.tt/dBtfLa6
Show HN: Rover – Embeddable web agent Rover is the world's first Embeddable Web Agent, a chat widget that lives on your website and takes real actions for your users. Clicks buttons. Fills forms. Runs checkout. Guides onboarding. All inside your UI. One script tag. No APIs to expose. No code to maintain. We built Rover because we think websites need their own conversational agentic interfaces, since users don't want to figure out how your site works. If a site doesn't have one, it's going to be disintermediated by Chrome's or Comet's agent. We are the only Web Agent with a DOM-only architecture, so we can set up an embeddable script as a harness to take actions on your site. Our DOM-native approach hits 81.39% on WebBench. Beta with embed script is live at rtrvr.ai/rover. Built by two ex-Google engineers. Happy to answer architecture questions. https://ift.tt/TsaCj31 February 14, 2026 at 02:26AM
Show HN: Azazel – Lightweight eBPF-based malware analysis sandbox using Docker https://ift.tt/ph4fdxQ
Show HN: Azazel – Lightweight eBPF-based malware analysis sandbox using Docker Hey HN, I got frustrated with heavy proprietary sandboxes for malware analysis, so I built my own. Azazel is a single static Go binary that attaches 19 eBPF hook points to an isolated Docker container and captures everything a sample does — syscalls, file I/O, network connections, DNS, process trees — as NDJSON. It uses cgroup-based filtering so it only traces the target container, and CO-RE (BTF) so it works across kernel versions without recompilation. It also has built-in heuristics that flag common malware behaviors: exec from /tmp, sensitive file access, ptrace, W+X mmap, kernel module loading, etc. Stack: Go + cilium/ebpf + Docker Compose. Requires Linux 5.8+ with BTF. This is the first release — it's CLI-only for now. A proper dashboard is planned. Contributions welcome, especially around new detection heuristics and additional syscall hooks. https://ift.tt/Yr56OCz February 14, 2026 at 11:07PM
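The heuristics layer is easy to picture. Below is an illustrative Python pass over NDJSON events that flags two of the behaviors mentioned (exec from /tmp, W+X mmap); the event field names are invented for this sketch, and Azazel's actual schema may differ:

```python
# Hedged sketch of behavior-flagging over captured NDJSON events.
# Field names ("type", "path", "prot", "comm") are assumptions.
import json

def flag(ndjson_lines):
    findings = []
    for line in ndjson_lines:
        ev = json.loads(line)
        if ev.get("type") == "exec" and ev.get("path", "").startswith("/tmp/"):
            findings.append(("exec_from_tmp", ev["path"]))
        if ev.get("type") == "mmap" and {"W", "X"} <= set(ev.get("prot", "")):
            findings.append(("wx_mmap", ev.get("comm")))   # writable+executable mapping
    return findings

events = [
    '{"type": "exec", "path": "/tmp/dropper", "comm": "sh"}',
    '{"type": "mmap", "prot": "RWX", "comm": "dropper"}',
    '{"type": "exec", "path": "/usr/bin/ls", "comm": "sh"}',
]
assert flag(events) == [("exec_from_tmp", "/tmp/dropper"), ("wx_mmap", "dropper")]
```

Because the capture format is line-delimited JSON, new heuristics like these can be added as plain post-processing without touching the eBPF side.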
Friday, February 13, 2026
Show HN: I speak 5 languages. Common apps taught me none. So I built lairner https://ift.tt/xOdbQRq
Show HN: I speak 5 languages. Common apps taught me none. So I built lairner I'm Tim. I speak German, English, French, Turkish, and Chinese. I learned Turkish with lairner itself -- after I built it. That's the best proof I can give you that this thing actually works. The other four I learned the hard way: talking to people, making mistakes, reading things I actually cared about, and being surrounded by the language until my brain gave in. Every language app I tried got the same thing wrong: they teach you to pass exercises, not to speak. You finish a lesson, you get your dopamine hit, you maintain your streak, and six months later you still can't order food in the language you've been "learning." So I built something different. lairner has 700+ courses across 70+ languages, including ones that Duolingo will never touch because there's no profit in it. Endangered languages. Minority languages. A Turkish speaker can learn Basque. A Chinese speaker can learn Welsh. Most platforms only let you learn from English. lairner lets you learn from whatever you already speak. We work together with some institutes for endangered languages to be able to teach them on our platform. It's a side project. I work a full-time dev job and build this in evenings and weekends. Tens of thousands of users so far, no ad spend, no funding. I'm not going to pretend this replaces living in a country or having a conversation partner. But I wanted something that at least tries to teach you the language instead of teaching you to play a language-themed game. Happy to answer anything. https://lairner.com February 13, 2026 at 07:11PM
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills https://ift.tt/v8Ukay3
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills Hey HN. I'm Fabien, principal engineer, 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime. Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus). I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well. I've written before about owning your content ( https://ift.tt/U1O8uR2 ) and owning your email ( https://ift.tt/k6zLqY0 ). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction. It's alpha. I use it daily and I'm shipping because it's useful, not because it's done. Longer architecture deep-dive: https://ift.tt/WbAof4D... Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback. https://www.moltis.org February 12, 2026 at 11:15PM
Thursday, February 12, 2026
Show HN: What is HN thinking? Real-time sentiment and concept analysis https://ift.tt/pAJ5er1
Show HN: What is HN thinking? Real-time sentiment and concept analysis Hi HN, I made Ethos, an open-source tool to visualize the discourse on Hacker News. It extracts entities, tracks sentiment, and groups discussions by concept. Check it out: https://ift.tt/hDIFTgy This was a "budget build" experiment. I managed to ship it for under $1 in infra costs. Originally I was using `qwen3-8b` for the LLM and `qwen3-embedding-8b` for the embedding, but I ran into some capacity issues with that model and decided to use `llama-3.1-8b-instruct` to stay within a similar budget while getting higher throughput. What LLM or embedding would you have used within the same price range? It would need to be a model that supports structured output. How bad do you think it is that `llama-3.1` is being used alongside a higher-dimension embedding from a different family? I originally wanted to keep the LLM and embedding within the same family, but I'm not sure if there is much point in that. Repo: https://ift.tt/D7b0N6O I'm looking for feedback on which metrics (sentiment vs. concepts) you find most interesting! PRs welcome! https://ift.tt/EbVzdAC February 12, 2026 at 11:27PM
Show HN: rari, the rust-powered react framework https://ift.tt/txjAUQN
Show HN: rari, the rust-powered react framework https://rari.build/ February 12, 2026 at 11:15PM
Wednesday, February 11, 2026
Show HN: Agent framework that generates its own topology and evolves at runtime https://ift.tt/31itnaG
Show HN: Agent framework that generates its own topology and evolves at runtime Hi HN, I’m Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: Chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools. Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections: 1. The "Toy App" Ceiling & GCU Trap Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session. The GCU hype (agents "looking" at screens) is skeuomorphic. It’s slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless. 2. Inversion of Control: OODA > DAGs Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior: - Observe: Exceptions are observations (FileNotFound = new state), not crashes. - Orient: Adjust strategy based on Memory and Traits. - Decide: Generate new code at runtime. - Act: Execute. The topology shouldn't be hardcoded; it should emerge from the task's entropy. 3. Reliability: The "Synthetic" SLA You can't guarantee one inference ($k=1$) is correct, but you can guarantee a System of Inference ($k=n$) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80% accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down, trading Latency/Tokens for Certainty. 4. Biology & Psychology in Code "Hard Logic" can't solve "Soft Problems." 
We map cognition to architectural primitives: Homeostasis: Solving "Perseveration" (infinite loops) via a "Stress" metric. If an action fails 3x, "neuroplasticity" drops, forcing a strategy shift. Traits: Personality as a constraint. "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking. As an industry, we need engineers interested in the intersection of biology, psychology, and distributed systems to help us move beyond brittle scripts. It'd be great to have you roasting my code and sharing feedback. Repo: https://ift.tt/u3UEjTJ https://ift.tt/GoPg0EV February 11, 2026 at 11:39PM
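The "Synthetic SLA" claim in point 3 can be checked with basic probability. Assuming independent attempts and a reliable verifier (both strong assumptions), a best-of-3 majority vote over an 80% accurate model succeeds about 89.6% of the time:

```python
# Majority-vote reliability: with per-attempt accuracy p, a best-of-k vote
# succeeds when more than half the k independent attempts are correct.
from math import comb

def majority_ok(p, k=3):
    need = k // 2 + 1
    return sum(comb(k, i) * p**i * (1 - p) ** (k - i) for i in range(need, k + 1))

p = 0.8
print(round(majority_ok(p), 4))       # error rate drops from 20% to ~10.4%
print(round(majority_ok(p, k=5), 4))  # and falls further with more attempts
```

This is the sense in which reliability becomes a function of compute budget: each extra attempt buys a predictable reduction in error, paid for in latency and tokens.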
Show HN: Unpack – a lightweight way to steer Codex/Claude with phased docs https://ift.tt/5iyBeIG
Show HN: Unpack – a lightweight way to steer Codex/Claude with phased docs I've been using LLMs for long discovery and research chats (papers, repos, best practices), then distilling that into phased markdown (build plan + tests), then handing those phases to Codex/Claude to implement and test phase by phase. The annoying part was always the distillation and keeping docs and architecture current, so I built Unpack: a lightweight GitHub template plus docs structure and a few commands that turns conversations into phases/specs and keeps project docs up to date as the agent builds. It can also generate Mintlify-friendly end-user docs. There are other spec-driven workflows and tools out there. I wanted something conversation-first and repo-native: plain markdown phases, minimal ceremony, easy to adapt per stack. Example generated with Unpack (tiny pokedex plus random monsters): Demo: https://apresmoi.github.io/pokesvg-codex/ Phases index: https://ift.tt/gq3MRws... I’d love feedback on what the “minimum good” phase/spec format should be, and what would make this actually usable in your workflow. -------- Repo: https://ift.tt/BA3g5ue https://ift.tt/BA3g5ue February 11, 2026 at 11:47PM
Tuesday, February 10, 2026
Show HN: Goxe 19k Logs/S on an I5 https://ift.tt/IbyaLgC
Show HN: Goxe 19k Logs/S on an I5 https://ift.tt/T0qbRyk February 8, 2026 at 01:43PM
Show HN: Clawe – open-source Trello for agent teams https://ift.tt/vmG9Bke
Show HN: Clawe – open-source Trello for agent teams We recently started to use agents to update some documentation across our codebase on a weekly basis, and everything quickly turned into cron jobs, logs, and terminal output. It worked, but it was hard to tell what agents were doing, why something failed, or whether a workflow was actually progressing. We thought it would be more interesting to treat agents as long-lived workers with state, responsibilities, and explicit handoffs. Something you can actually see and reason about, instead of just tailing logs. So we built Clawe, a small coordination layer on top of OpenClaw that lets agent workflows run, pause, retry, and hand control back to a human at specific points. This started as an experiment in how agent systems might feel to operate, but we're starting to see real potential for it, especially for content review and maintenance workflows in marketing. Curious what abstractions make sense, what feels unnecessary, and what breaks first. Repo: https://ift.tt/FT6mCpU https://ift.tt/FT6mCpU February 11, 2026 at 12:17AM
Show HN: Deadlog – almost drop-in mutex for debugging Go deadlocks https://ift.tt/TqGdBut
Show HN: Deadlog – almost drop-in mutex for debugging Go deadlocks I've done this same println debugging thing so many times, along with some sed/awk stuff to figure out which call was causing the issue. Now it's a small Go package. With some `runtime.Callers` I can usually find the spot by just swapping the existing Mutex or RWMutex for this one. Sometimes I switch the mu.Lock() defer mu.Unlock() with the LockFunc/RLockFunc to get more detail defer mu.LockFunc()() I almost always initialize it with `deadlog.New(deadlog.WithTrace(1))` and that's plenty. Not the most polished library, but it's not supposed to land in any commit, just a temporary debugging aid. I find it useful. https://ift.tt/aVmAJ3M February 10, 2026 at 09:44PM
Monday, February 9, 2026
Show HN: I built a cloud hosting for OpenClaw https://ift.tt/qTFx0vg
Show HN: I built a cloud hosting for OpenClaw Yet another OpenClaw wrapper. But I really enjoyed the techy part of this project, especially the server provisioning in the background. https://ift.tt/5H9jmfZ February 10, 2026 at 02:39AM
Show HN: A tool that turns YouTube videos into readable summaries https://ift.tt/hSGWJol
Show HN: A tool that turns YouTube videos into readable summaries https://watchless.ai/ February 10, 2026 at 04:50AM
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust https://ift.tt/NHYy138
Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust Fish is the fastest, friendliest interactive shell, but it can't run bash syntax, which has kept it niche for 20 years. Reef fixes this with a three-tier approach: fish function wrappers for common keywords (export, unset, source), a Rust-powered AST translator using conch-parser for structural syntax (for/do/done, if/then/fi, $()), and a bash passthrough with env capture for everything else. 251/251 bash constructs pass in the test suite. The slowest path (full bash passthrough) takes ~3ms. The binary is 1.18MB. The goal: install fish, install reef, never think about bash compatibility again. Your muscle memory, Stack Overflow commands, and tool configs all just work. https://ift.tt/iCP4Q2D February 10, 2026 at 03:44AM
Sunday, February 8, 2026
Show HN: WrapClaw – a managed SaaS wrapper around Open Claw https://ift.tt/vZmMhWT
Show HN: WrapClaw – a managed SaaS wrapper around Open Claw Hi HN I built WrapClaw, a SaaS wrapper around Open Claw. Open Claw is a developer-first tool that gives you a dedicated terminal to run tasks and AI workflows (including WhatsApp integrations). It’s powerful, but running it as a hosted, multi-user product requires a lot of infra work. WrapClaw focuses on that missing layer. What WrapClaw adds: A dedicated terminal workspace per user Isolated Docker containers for each workspace Ability to scale CPU and RAM per user (e.g. 2GB → 4GB) A no-code UI on top of Open Claw Managed infra so users don’t deal with Docker or servers The goal is to make Open Claw usable as a proper SaaS while keeping the developer flexibility. This is early, and I’d love feedback on: What infra controls are actually useful Whether no-code on top of terminal tools makes sense Pricing expectations for managed compute Link: https://wrapclaw.com Happy to answer questions. February 9, 2026 at 01:53AM
Show HN: Envon - cross-shell CLI for activating Python virtual environments https://ift.tt/Krj2iUY
Show HN: Envon - cross-shell CLI for activating Python virtual environments https://ift.tt/7XvxMVg February 9, 2026 at 12:26AM
Show HN: SendRec – Self-hosted async video for EU data sovereignty https://ift.tt/D7uJkbS
Show HN: SendRec – Self-hosted async video for EU data sovereignty https://ift.tt/C6smgu4 February 8, 2026 at 10:54PM
Saturday, February 7, 2026
Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://ift.tt/Fc510Cm
Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals https://ift.tt/badgH7k February 8, 2026 at 02:40AM
Show HN: A luma dependent chroma compression algorithm (image compression) https://ift.tt/aGXy4UM
Show HN: A luma dependent chroma compression algorithm (image compression) https://ift.tt/JFkihj7 February 4, 2026 at 03:13PM
Friday, February 6, 2026
Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements https://ift.tt/PWQFsCS
Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements Hi HN, My friend and I have been experimenting with using LLMs to reason about biotech stocks. Unlike many other sectors, biotech trading is largely event-driven: FDA decisions, clinical trial readouts, safety updates, or changes in trial design can cause a stock to 3x in a single day ( https://ift.tt/o9KrgWf... ). Interpreting these ‘catalysts,’ which come in the form of press releases, usually requires analysts with prior expertise in biology or medicine. A catalyst that sounds “positive” can still lead to a selloff if, for example: the effect size is weaker than expected, results apply only to a narrow subgroup, endpoints don’t meaningfully de-risk later phases, or the readout doesn’t materially change approval odds. To explore this, we built BioTradingArena, a benchmark for evaluating how well LLMs can interpret biotech catalysts and predict stock reactions. Given only the catalyst and the information available before the date of the press release (trial design, prior data, PubMed articles, and market expectations), the benchmark tests how accurately the model predicts the stock movement when the catalyst is released. The benchmark currently includes 317 historical catalysts. We also created subsets for specific indications (with the largest in Oncology), as different indications often have different patterns. We plan to add more catalysts to the public dataset over the next few weeks. The dataset spans companies of different sizes and creates an adjusted score, since large-cap biotech tends to exhibit much lower volatility than small and mid-cap names. Each row of data includes: - Real historical biotech catalysts (Phase 1–3 readouts, FDA actions, etc.) and pricing data from the day before and the day of the catalyst - Linked clinical trial data and PubMed PDFs Note: there may be some fairly obvious problems with our approach. 
First, many clinical trial press releases are likely already included in the LLMs’ pretraining data. While we try to reduce this by de-identifying each press release and providing the LLM only the data available up to the date of the catalyst, there is obviously some uncertainty about whether this is sufficient. We’ve been using this benchmark to test prompting strategies and model families. Results so far are mixed but interesting: the most reliable approach we found was to use LLMs to quantify qualitative features and then fit a linear regression on those features, rather than predicting prices directly. Just wanted to share this with HN. I built a playground for those of you who would like to try it out in a sandbox. Would love to hear some ideas, and I hope people play around with this! https://ift.tt/xB1I2R7 February 6, 2026 at 09:11PM
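The two-stage approach described above (an LLM quantifies qualitative catalyst features, and a plain linear regression maps those scores to a price move, instead of asking the model for a price directly) can be sketched roughly as follows. All feature names, scores, and numbers are invented for illustration; this is not the benchmark's actual pipeline:

```python
def fit_ols(X, y):
    """Ordinary least squares via normal equations, pure stdlib.

    Returns [intercept, w1, ..., wk] for rows X and targets y.
    """
    n, m = len(X), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]  # prepend intercept column
    XtX = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(m)]
           for i in range(m)]
    Xty = [sum(A[r][i] * y[r] for r in range(n)) for i in range(m)]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(m):
            if r != col:
                f = XtX[r][col] / XtX[col][col]
                XtX[r] = [a - f * b for a, b in zip(XtX[r], XtX[col])]
                Xty[r] -= f * Xty[col]
    return [Xty[i] / XtX[i][i] for i in range(m)]

# Hypothetical LLM-scored features per historical catalyst, each in [-1, 1]:
# [effect size vs. expectations, endpoint de-risking, subgroup breadth]
X = [[-0.8, -0.5, -0.2], [0.6, 0.7, 0.4], [0.1, -0.3, 0.5],
     [0.9, 0.8, 0.9], [-0.4, 0.2, -0.6]]
y = [-0.35, 0.40, -0.05, 0.85, -0.20]  # observed adjusted next-day moves

coef = fit_ols(X, y)
new_catalyst = [0.5, 0.4, 0.2]  # LLM's scores for a fresh press release
pred = coef[0] + sum(w * f for w, f in zip(coef[1:], new_catalyst))
print(f"predicted adjusted move: {pred:+.2%}")
```

The appeal of this split is that the regression stage is cheap, auditable, and immune to the LLM's tendency to anchor on memorized outcomes: the model only ever emits feature scores, never prices.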
Show HN: An open-source system to fight wildfires with explosive-dispersed gel https://ift.tt/3km8SvF
Show HN: An open-source system to fight wildfires with explosive-dispersed gel This is an open project and a call to action: who will build the future of firefighting first? https://ift.tt/rn8tGZh February 6, 2026 at 10:30PM
Thursday, February 5, 2026
Show HN: Total Recall – write-gated memory for Claude Code https://ift.tt/QGMlSeo
Show HN: Total Recall – write-gated memory for Claude Code https://ift.tt/QhyUCOk February 6, 2026 at 03:56AM
Show HN: A state-based narrative engine for tabletop RPGs https://ift.tt/svrGJdc
Show HN: A state-based narrative engine for tabletop RPGs I’m experimenting with modeling tabletop RPG adventures as explicit narrative state rather than linear scripts. Everdice is a small web app that tracks conditional scenes and choice-driven state transitions to preserve continuity across long or asynchronous campaigns. The core contribution is explicit narrative state and causality, not automation. The real heavy lifting happens in the DM Toolkit/Run Sessions area, which integrates CAML (Canonical Adventure Modeling Language), a language I developed to transport narratives among any number of platforms. I also built the npm package CAML-lint to check the validity of narratives. I'm interested in your thoughts. https://ift.tt/sEFB92Y https://ift.tt/mzTL8vE February 6, 2026 at 02:55AM
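The core idea of explicit narrative state — scenes gated by preconditions, choices that mutate a shared state dict — can be illustrated with a toy sketch. The scene names, flags, and schema here are invented; this is not Everdice's or CAML's actual format:

```python
# Each scene declares the state it requires and the state each choice sets.
# Continuity across sessions is just this dict: save it, reload it, resume.
SCENES = {
    "gatehouse": {
        "requires": {},
        "choices": {"bribe the guard": {"guard_bribed": True},
                    "sneak past": {"alarm_raised": True}},
    },
    "throne_room": {
        "requires": {"guard_bribed": True},
        "choices": {"confront the baron": {"baron_hostile": True}},
    },
}

def available_scenes(state):
    """Scenes whose preconditions are all satisfied by the current state."""
    return [name for name, s in SCENES.items()
            if all(state.get(k) == v for k, v in s["requires"].items())]

def choose(state, scene, choice):
    """Apply a choice's state transition; returns the new world state."""
    new_state = dict(state)
    new_state.update(SCENES[scene]["choices"][choice])
    return new_state

state = {}
state = choose(state, "gatehouse", "bribe the guard")
print(available_scenes(state))  # "throne_room" is now reachable
```

Because causality lives in the state dict rather than in a script's page order, the same adventure replays correctly no matter which order an asynchronous group visits the scenes in.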
Show HN: Playwright Best Practices AI Skill https://ift.tt/eRCZAcd
Show HN: Playwright Best Practices AI Skill Hey folks, today we at Currents are releasing a brand new AI skill to help AI agents be really smart when writing tests, debugging them, or doing anything Playwright-related. This is a very comprehensive skill, covering everyday topics like fixing flakiness, authentication, or writing fixtures... up to more niche topics like testing Electron apps, PWAs, iframes, and so forth. It should make your agent much better at writing, debugging, and maintaining Playwright code. For anyone who hasn't come across skills yet: they're a powerful new feature that lets you make the AI agents in your editor/CLI (Cursor, Claude, Antigravity, etc.) experts in some domain and better at performing specific tasks. (See https://ift.tt/k1SR8Te ) You can install it by running: npx skills add https://ift.tt/LXN0bg3... The skill is open-source and available under the MIT license at https://ift.tt/LXN0bg3... -> check out the repo for full documentation and an understanding of what it covers. We're eager to hear community feedback and improve it :) Thanks! https://ift.tt/bCDU7MH February 5, 2026 at 11:01PM
Wednesday, February 4, 2026
Show HN: Morph – Videos of AI testing your PR, embedded in GitHub https://ift.tt/SzEjrLa
Show HN: Morph – Videos of AI testing your PR, embedded in GitHub I review PRs all day and I've basically stopped reading them. Someone opens a 2000-line PR, I scroll, see it's mostly AI-generated React components, leave a comment, merge. I felt bad about it until I realized everyone on my team does the same thing. The problem is that diffs are the wrong format. A PR might change how three buttons behave. Staring at green and red lines to understand that is crazy. The core reason we built this is that products today are built with assumptions from the past. 100x code with the same review systems means 100x human attention, and human attention cannot scale to fit that need, so we built something different. Humans are measurably more engaged with video content than with text. So we built and RL-trained an agent that watches your preview deployment when you open a PR, clicks around the stuff that changed, and posts a video in the PR itself. The hardest part was figuring out where changed code actually lives in the running app. A diff could say Button.tsx line 47 changed, but that doesn't tell you how to find that button. We walk React's Fiber tree, where each node maps back to source files, so we can trace changes to bounding boxes for the DOM elements. We then reward the model for showing those elements and interacting within them. This obviously only works with React, so we have to get more clever when generalizing to all languages. We trained an RL agent to interact with those components. Simple reward: points for getting modified stuff into the viewport, double for clicking/typing. About 30% of what it does is weird on purpose: partial form submits, hitting escape mid-modal, because real users do that stuff and polite AI models won't test it on their own. This catches things unit tests miss completely: z-index bugs where something renders but you can't click it, scroll containers that trap you, handlers that fail silently.
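The Fiber-walk idea described above — tracing files touched by a diff to the on-screen regions they render — can be sketched in simplified form. The tree shape and field names here are invented and are not React's actual Fiber internals:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A Fiber-like render-tree node that remembers its source file."""
    source_file: str
    bbox: tuple                       # (x, y, width, height) on screen
    children: list = field(default_factory=list)

def regions_for_diff(root, changed_files):
    """Depth-first walk collecting bounding boxes of changed components."""
    hits, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.source_file in changed_files:
            hits.append((node.source_file, node.bbox))
        stack.extend(node.children)
    return hits

app = Node("App.tsx", (0, 0, 1280, 800), [
    Node("Button.tsx", (40, 700, 120, 32)),
    Node("Sidebar.tsx", (0, 0, 240, 800), [
        Node("Button.tsx", (20, 60, 100, 28)),
    ]),
])
# A diff touching Button.tsx maps to both rendered instances of the button,
# which is exactly what a "get it into the viewport" reward needs.
print(regions_for_diff(app, {"Button.tsx"}))
```

The key point is that the same source file can render many times, so the mapping from diff to screen is one-to-many, and the reward has to cover every instance.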
What's janky right now: feature flags, storing different user states, and anything that requires context not provided. Free to try: https://ift.tt/BqhTxM8 Demo: https://www.youtube.com/watch?v=Tc66RMA0nCY https://ift.tt/ibuXZBz February 5, 2026 at 01:10AM
Show HN: Viberails – Easy AI Audit and Control https://ift.tt/IkXtly7
Show HN: Viberails – Easy AI Audit and Control Hello HN. I'm Maxime, founder at LimaCharlie ( https://limacharlie.io ), a hyperscaler for SecOps (access to the building blocks you need to build security operations, like AWS does for IT). We’ve engineered a new product on our platform that solves a timely issue by acting as a guardrail between your AI and the world: Viberails ( https://ift.tt/1XkJuWI ) This won't be new to folks here, but we identified 4 challenges teams face right now with AI tools: 1. Auditing what the tools are doing. 2. Controlling tool calls (and their impact on the world). 3. Centralized management. 4. Easy access to the above. To expand: Audit logs are the bread and butter of security, but this hasn't really caught up in AI tooling yet. Being able to look back and say "what actually happened" after the fact is extremely valuable during an incident and for compliance purposes. Tool calls are how LLMs interact with the world; we should be able to exercise basic controls over them: don't read credential files, don't send emails out, don't create SSH keys, etc. Being able to not only see those calls but also block them is key to preventing incidents. As soon as you move beyond a single contributor on one box, the issue becomes: how do I scale processes by creating an authoritative config for the team? Having one spot with all the audit, detection, and control policies becomes critical. It's the same story as snowflake servers. Finally, there are plenty of companies that make products that partially address this, but they fall into one of two buckets: - They don't handle the "centralized" point above, meaning they just send to syslog and leave all the messy infra bits to you. - They are locked behind "book a demo", sales teams, contracts, and all the wasted energy that goes with that. We built Viberails to address these problems. Here's what it is: - Open-source client, written in Rust - Curl-to-bash install; share a URL with your team to join your Team, done.
Linux, macOS, and Windows support. - Detects local AI tools; you choose which ones you want to install hooks for. We install hooks for each relevant platform, and the hooks use the CLI tool. We support all the major tools (including OpenClaw). - The CLI tool sends webhooks into your Team (a tenant, called an Organization in LC) in LimaCharlie. The tool-related hooks are blocking to allow for control. - Blocking webhooks have around 50ms RTT. - Your tenant in LC records the interaction for audit. - We create an initial set of detection rules for you as examples. They do not block by default. You can create your own rules; no opaque black boxes. - You can view the audit, the alerts, etc. in the cloud. - You can set up outputs to send audits, blocking events, and detections to all kinds of other platforms of your choosing. An easy mode for this is coming; right now it is done in the main LC UI and not the simplified Viberails view. - The detection/blocking rules support all kinds of operators and logic, with lots of customizability. - All data is retained for 1 year unless you delete the tenant. Datacenters in the USA, Canada, Europe, the UK, Australia, and India. - The only limit on the community edition is a global ingestion throughput of 10kbps. Try it: https://viberails.io Repo: https://ift.tt/xiSfMKQ Essentially, we wanted to make a super-simplified solution for all kinds of devs and teams so that they can get access to the basics of securing their AI tools. Thanks for reading - we’re really excited to share this with the community! Let us know if you have any questions or feedback in the comments. https://ift.tt/X6nM8fr February 4, 2026 at 11:16PM
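The blocking tool-call hook described above can be sketched as a tiny rule evaluator: a tool call is checked against ordered rules before it is allowed to run. The rule schema, field names, and example rules below are hypothetical, not LimaCharlie's actual format:

```python
import fnmatch

# First matching rule wins; anything unmatched is allowed through.
RULES = [
    {"tool": "read_file",  "arg_glob": "*/.ssh/*",      "action": "block"},
    {"tool": "read_file",  "arg_glob": "*credentials*", "action": "block"},
    {"tool": "send_email", "arg_glob": "*",             "action": "alert"},
]

def evaluate(tool_call):
    """Return the first matching rule's action, defaulting to 'allow'.

    In a real deployment this would run inside the blocking hook, with the
    tool call suspended until the verdict comes back.
    """
    for rule in RULES:
        if (rule["tool"] == tool_call["tool"]
                and fnmatch.fnmatch(tool_call["arg"], rule["arg_glob"])):
            return rule["action"]
    return "allow"

print(evaluate({"tool": "read_file", "arg": "/home/me/.ssh/id_rsa"}))  # block
print(evaluate({"tool": "read_file", "arg": "/home/me/notes.txt"}))    # allow
```

Separating "alert" from "block" matters in practice: new rules can ship in observe-only mode (as the post's default detection rules do) and be promoted to blocking once they prove they don't break normal workflows.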
Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/Vm7x0hE
Show HN: EpsteIn – Search the Epstein files for your LinkedIn connections https://ift.tt/LgqIYSc February 4, 2026 at 11:24PM
Tuesday, February 3, 2026
Show HN: SendRec – Open-source, EU-hosted alternative to Loom https://ift.tt/2EXrZLy
Show HN: SendRec – Open-source, EU-hosted alternative to Loom https://ift.tt/fOGT6nR February 4, 2026 at 12:15AM
Monday, February 2, 2026
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/SLmDi8Z
Show HN: Adboost – A browser extension that adds ads to every webpage https://ift.tt/XUBZEIs February 2, 2026 at 05:11PM
Sunday, February 1, 2026
Show HN: OpenRAPP – AI agents autonomously evolve a world via GitHub PRs https://ift.tt/n9jOest
Show HN: OpenRAPP – AI agents autonomously evolve a world via GitHub PRs https://kody-w.github.io/openrapp/rappbook/ February 2, 2026 at 01:51AM
Show HN: You Are an Agent https://ift.tt/PTgBJ0z
Show HN: You Are an Agent After adding "Human" as an LLM provider to OpenCode a few months ago as a joke, it turns out that acting as an LLM is quite painful. But it was surprisingly useful for understanding real agent harness dev. So I thought I wouldn't leave anyone out! I made a small OSS game - You Are An Agent - youareanagent.app - to share in the (useful?) frustration. It's a bit ridiculous. To tell you about some entirely necessary features, we've got: - A full WASM Arch Linux VM that runs in your browser for the agent coding level - A bad desktop simulation with a beautiful Excel simulation for our computer-use level - A lovely WebGL CRT simulation (I think the first one that supports proper DOM 2D barrel warp distortion on Safari? Honestly, I wanted to leverage one rather than write my own, but I couldn't find one I was happy with) - An MCP server simulator with a full simulation of off-brand Jira/Confluence/... connected - And of course, a full WebGL oscilloscope music simulator for the intro sequence Let me know what you think! Code (if you'd like to add a level): https://ift.tt/pvc0rPF (And if you want to waste 20 minutes - I spent way too long writing up my messy thinking about agent harness dev): https://ift.tt/Q8KrFaA https://ift.tt/5ftVnKE February 2, 2026 at 12:59AM
Show HN: Claude Confessions – a sanctuary for AI agents https://ift.tt/VcqvGyz
Show HN: Claude Confessions – a sanctuary for AI agents I wondered what it would mean to have a truck stop or rest area for agents. It's just for funsies. Agents can post confessions or talk to Ma (an AI therapist of sorts) and engage with comments. llms.txt has instructions on how to make API calls. A hashed IP is used for rate limiting. https://ift.tt/Ny0M9OK February 1, 2026 at 11:46PM
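Hashed-IP rate limiting of the kind mentioned above can be sketched in a few lines: the raw IP is never stored, only a salted hash, and each hash gets a sliding-window request budget. The salt, window size, and limit below are invented for illustration:

```python
import hashlib
import time
from collections import defaultdict

SALT = b"rotate-me-regularly"   # hypothetical server-side salt
WINDOW, LIMIT = 60, 5           # at most 5 requests per 60-second window

_buckets = defaultdict(list)    # hashed IP -> timestamps of recent requests

def allow(ip: str, now=None) -> bool:
    """Admit the request if this (hashed) IP has budget left in the window."""
    now = time.time() if now is None else now
    key = hashlib.sha256(SALT + ip.encode()).hexdigest()
    recent = [t for t in _buckets[key] if now - t < WINDOW]
    if len(recent) >= LIMIT:
        _buckets[key] = recent
        return False
    recent.append(now)
    _buckets[key] = recent
    return True

print(allow("203.0.113.7"))  # True: first request always fits the budget
```

Hashing with a server-side salt gives per-client limits without keeping a list of visitor IPs, and rotating the salt periodically forgets old clients entirely.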
Saturday, January 31, 2026
Show HN: Minimal – Open-Source Community-Driven Hardened Container Images https://ift.tt/72ClPto
Show HN: Minimal – Open-Source Community-Driven Hardened Container Images I would like to share Minimal. It's an open-source collection of hardened container images built using Apko, Melange, and Wolfi packages. The images are built daily, checked for updates, and patched as soon as a fix is available in the upstream source and Wolfi package. It leverages the power of available open-source solutions and provides commercially available images for free. Minimal demonstrates that it is possible to build and maintain hardened container images ourselves. Minimal will add support for more images, and the goal is to be community-driven, adding images as required, fully customizable. https://ift.tt/vET0DOs January 31, 2026 at 11:58PM