Saturday, January 31, 2026
Show HN: Minimal – Open-Source Community driven Hardened Container Images https://ift.tt/72ClPto
Show HN: Minimal – Open-Source Community driven Hardened Container Images I would like to share Minimal – an open-source collection of hardened container images built using Apko, Melange, and Wolfi packages. The images are rebuilt daily, checked for updates, and patched as soon as a fix is available in the upstream source and the Wolfi package. It leverages the power of available open-source solutions to provide, for free, the kind of images that are otherwise sold commercially. Minimal demonstrates that it is possible to build and maintain hardened container images ourselves. Minimal will add support for more images over time; the goal is a community-driven, fully customizable collection where images are added as required. https://ift.tt/vET0DOs January 31, 2026 at 11:58PM
Show HN: An extensible pub/sub messaging server for edge applications https://ift.tt/Q7UCsfh
Show HN: An extensible pub/sub messaging server for edge applications Hi there! I’ve been working on a project called Narwhal, and I wanted to share it with the community to get some valuable feedback. What is it? Narwhal is a lightweight pub/sub server and protocol designed specifically for edge applications. While there are great tools out there like NATS or MQTT, I wanted to build something that prioritizes customization and extensibility. My goal was to create a system where developers can easily adapt the routing logic or message-handling pipeline to fit specific edge use cases, without fighting the server's defaults. Why Rust? I chose Rust because I needed a low memory footprint to run efficiently on edge devices (like Raspberry Pis or small gateways), and also because I have a personal vendetta against garbage-collection pauses. :) Current status: it is currently in alpha. It works for basic pub/sub patterns, but I’d like to start working on persistence support soon (so messages survive restarts or network partitions). I’d love for you to take a look at the code! I’m particularly interested in any feedback regarding improvements I may have overlooked. https://ift.tt/l730obp January 28, 2026 at 05:59PM
Friday, January 30, 2026
Show HN: Daily Cat https://ift.tt/uPnyOkY
Show HN: Daily Cat Seeing HTTP Cats on the home page reminded me to share a small project I made a couple of months ago. It displays a different cat photo from Unsplash every day and will send you notifications if you opt in. https://daily.cat/ January 31, 2026 at 02:10AM
Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. Infinite Memory https://ift.tt/chm0ua5
Show HN: A Local OS for LLMs. MIT License. Zero Hallucinations. Infinite Memory The problem with LLMs isn't intelligence; it's amnesia and dishonesty. Hey HN, I’ve spent the last few months building Remember-Me, an open-source "Sovereign Brain" stack designed to run entirely offline on consumer hardware. The core thesis is simple: don't rent your cognition. Most RAG (Retrieval-Augmented Generation) implementations are just "grep for embeddings." They are messy, imprecise, and prone to hallucination. I wanted to solve the context-integrity problem at the architectural layer. The Tech Stack (How it works): QDMA (Quantum Dream Memory Architecture): Instead of a flat vector DB, it uses a hierarchical projection engine. It separates "Hot" (Recall) from "Cold" (Storage) memory, allowing for effectively infinite context-window management via compression. CSNP (Context Switching Neural Protocol) - The Hallucination Killer: This is the most important part. Every memory fragment is hashed into a Merkle chain. When the LLM retrieves context, the system cryptographically verifies the retrieval against the immutable ledger. If the hash doesn't match the chain, the retrieval is rejected. Result: the AI literally cannot "make things up" about your past because it is mathematically constrained to the ledger. Local Inference: Built on top of the llama.cpp server. It runs Llama-3 (or any GGUF) locally. No API keys. No data leaving your machine. Features: Zero-Dependency: Runs on Windows/Linux with just Python and a GPU (or CPU). Visual Interface: Includes a Streamlit-based "Cognitive Interface" to visualize memory states. Open Source: MIT License. This is an attempt to give agency back to the user. I believe that if we want AGI, it needs to be owned by us, not rented via an API. Repository: https://ift.tt/FqbcGmV I’d love to hear your feedback on the Merkle-verification approach. Does constraining the context window effectively solve the "trust" issue for you? 
It's fully working and fully tested. If you tried to git clone before without luck (this is not my first Show HN for this project), feel free to try again. To everyone who hates AI slop, greedy corporations, and having their private data stuck on cloud servers: you're welcome. Cheers, Mohamad https://ift.tt/FqbcGmV January 31, 2026 at 12:14AM
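For readers curious what ledger-verified retrieval could look like, here is a minimal Python sketch of the idea. It uses a linear hash chain rather than a full Merkle tree, and all names are illustrative, not Remember-Me's actual API: each stored fragment is chained to its predecessor's hash, and a retrieval is accepted only if rehashing it reproduces the ledger entry.

```python
import hashlib

def chain_hash(prev_hash: str, fragment: str) -> str:
    """Hash a memory fragment together with the previous link's hash."""
    return hashlib.sha256((prev_hash + fragment).encode()).hexdigest()

def build_ledger(fragments):
    """Append-only ledger: each entry stores the fragment and its chained hash."""
    ledger, prev = [], ""
    for frag in fragments:
        prev = chain_hash(prev, frag)
        ledger.append((frag, prev))
    return ledger

def verify_retrieval(ledger, index: int, retrieved: str) -> bool:
    """Accept a retrieved fragment only if its hash matches the ledger link."""
    prev = ledger[index - 1][1] if index > 0 else ""
    return chain_hash(prev, retrieved) == ledger[index][1]

ledger = build_ledger(["met Alice on Tuesday", "project deadline is May 3"])
print(verify_retrieval(ledger, 1, "project deadline is May 3"))  # True
print(verify_retrieval(ledger, 1, "project deadline is May 9"))  # False
```

Any fragment that was never written (or was tampered with) fails the hash check, which is the mechanism constraining retrieval to the ledger.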
Show HN: We added memory to Claude Code. It's powerful now https://ift.tt/TOVgjcC
Show HN: We added memory to Claude Code. It's powerful now https://ift.tt/mhdau9z January 30, 2026 at 09:23PM
Thursday, January 29, 2026
Show HN: Craft – Claude Code running on a VM with all your workplace docs https://ift.tt/aBOxVe5
Show HN: Craft – Claude Code running on a VM with all your workplace docs I’ve found coding agents to be great at 1/ finding everything they need across large codebases using only bash commands (grep, glob, ls, etc.) and 2/ building new things based on their findings (duh). What if, instead of a codebase, the files were all your workplace docs? There'd be a `Google_Drive` folder, a `Linear` folder, a `Slack` folder, and so on. Over the last week, we put together Craft to test this out. It’s an interface to a coding agent (OpenCode, for model flexibility) running on a virtual machine with: 1. your company's complete knowledge base represented as directories/files (kept in sync) 2. free rein to write and execute Python/JavaScript 3. the ability to create and render artifacts for the user Demo: https://www.youtube.com/watch?v=Hvjn76YSIRY GitHub: https://ift.tt/VReITd2... It turns out OpenCode does a very good job with docs. Workplace apps also have a natural structure (Slack channels about certain topics, Drive folders for teams, etc.). And since the full metadata of each document can be written to the file, the LLM can define arbitrarily complex filters. At scale, it can write and execute Python to extract and filter (and even re-use the verified-correct logic later). Put another way, bash + a file system provides a much more flexible and powerful interface than traditional RAG or MCP, which today’s smarter LLMs are able to take advantage of to great effect. This comes in especially handy for aggregation-style questions that require considering thousands (or more) of documents. Naturally, it can also create artifacts that stay up to date based on your company docs. So if you wanted “a dashboard to check realtime what % of outages were caused by each backend service” or simply “slides following XYZ format covering the topic I’m presenting at next week’s dev knowledge-sharing session”, it can do that too. 
Craft (like the rest of Onyx) is open-source, so if you want to run it locally (or mess around with the implementation) you can. Quickstart guide: https://ift.tt/dECSzUP Or, you can try it on our cloud: https://ift.tt/v5AnXrV (all your data goes on an isolated sandbox). Either way, we’ve set up a “demo” environment that you can play with while your data gets indexed. Really curious to hear what y’all think! January 29, 2026 at 07:45PM
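A rough sketch of why "bash + a file system" works as a retrieval interface: if each synced document is a file that carries its own metadata, arbitrary filters are just a few lines of Python. The layout and field names below are assumptions for illustration, not Onyx/Craft's actual schema.

```python
import json
import pathlib
import tempfile

def find_docs(root, **filters):
    """Walk the synced knowledge base; yield (path, doc) for every JSON doc
    whose metadata fields match all the given filters."""
    for path in pathlib.Path(root).rglob("*.json"):
        doc = json.loads(path.read_text())
        if all(doc.get(k) == v for k, v in filters.items()):
            yield path, doc

# Demo with a throwaway "Slack" folder (hypothetical layout):
root = pathlib.Path(tempfile.mkdtemp())
(root / "Slack").mkdir()
(root / "Slack" / "msg1.json").write_text(
    json.dumps({"channel": "eng-infra", "text": "postgres failover at 3am"}))
(root / "Slack" / "msg2.json").write_text(
    json.dumps({"channel": "random", "text": "lunch?"}))

hits = [doc["text"] for _, doc in find_docs(root, channel="eng-infra")]
print(hits)  # ['postgres failover at 3am']
```

Because the filter is ordinary code, an agent can compose, test, and reuse it, which is harder to do with a fixed RAG retrieval API.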
Wednesday, January 28, 2026
Show HN: Pinecone Explorer – Desktop GUI for the Pinecone vector database https://ift.tt/wLazQMI
Show HN: Pinecone Explorer – Desktop GUI for the Pinecone vector database https://ift.tt/rOusS98 https://ift.tt/DVqBEZM January 28, 2026 at 05:06AM
Show HN: Config manager for Claude Code (and others) – rules, MCPs, permissions https://ift.tt/gap2VGd
Show HN: Config manager for Claude Code (and others) – rules, MCPs, permissions I use Claude Code across multiple projects with different conventions and some shared repos, as tends to happen in the real world. Managing the config files (.claude/rules/, mcps.json, settings.json) by hand got tedious, so I built a local web UI for it. This one started out as claude-config but migrated to coder-config as I'm adding others (Gemini, AG, Codex, etc.). Main features: - Visual editor for rules, permissions, and MCP servers - Project registry to switch between codebases - "Workstreams" to group related repos (frontend + API + shared libs) with shared context - Auto-load workstreams on cd into included folders - Also supports Gemini CLI and Codex CLI

Install:

    npm install -g coder-config
    coder-config ui          # UI at http://localhost:3333
    coder-config ui install  # optionally, autostart on macOS

It can also be installed as a PWA and live in your taskbar. Open source, runs locally, no account needed. Feedback and contributions welcome! Sorry, I haven't had a chance to test on other OSes (Linux/Windows) yet. https://ift.tt/BhNCi8j January 28, 2026 at 07:44PM
Tuesday, January 27, 2026
Show HN: An open-source starter for developing with Postgres and ClickHouse https://ift.tt/eSZiIl8
Show HN: An open-source starter for developing with Postgres and ClickHouse https://ift.tt/CGvL2oP January 27, 2026 at 09:46PM
Show HN: Decrypting the Zodiac Z32 triangulates a 100ft triangular crop mark https://ift.tt/B6CnJDg
Show HN: Decrypting the Zodiac Z32 triangulates a 100ft triangular crop mark https://ift.tt/zEKGc42 January 27, 2026 at 11:12PM
Monday, January 26, 2026
Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry https://ift.tt/1gzbmtN
Show HN: LocalPass offline password manager. Zero cloud. Zero telemetry I’ve released LocalPass — a local‑first, offline password manager with zero cloud, zero telemetry, and zero vendor lock‑in. 100% local storage, 100% open‑source. https://ift.tt/rPCwvmq January 27, 2026 at 02:38AM
Show HN: Ourguide – OS wide task guidance system that shows you where to click https://ift.tt/etsCgXr
Show HN: Ourguide – OS wide task guidance system that shows you where to click Hey! I'm eshaan and I'm building Ourguide, an on-screen task guidance system that can show you where to click step-by-step when you need help. I started building this because whenever I didn’t know how to do something on my computer, I found myself constantly tabbing between chatbots and the app, pasting screenshots, and asking “what do I do next?” Ourguide solves this with two modes. In Guide mode, the app overlays your screen and highlights the specific element to click next, eliminating the need to leave your current window. There is also Ask mode, a vision-integrated chat that captures your screen context (which you can toggle on and off anytime), so you can ask, "How do I fix this error?" without having to explain what "this" is. It’s an Electron app that works OS-wide, is vision-based, and isn't restricted to the browser. Figuring out how to show the user where to click was the hardest part of the process. I originally trained a computer vision model on 2,300 screenshots to identify and segment all UI elements on a screen, and used a VLM to find the correct icon to highlight. While this worked extremely well—better than SOTA grounding models like UI-TARS—the latency was just too high. I'll be making that CV+VLM pipeline OSS soon, but for now, I’ve resorted to a simpler implementation that achieves <1s latency. You may ask: if I can show you where to click, why can't I just click too? While trying to build computer-use agents during my job in Palo Alto, I hit the core limitation of today’s computer-use models: benchmark scores hover in the mid-50% range (OSWorld). VLMs often know what to do but not what it looks like; without reliable visual grounding, agents misclick and stall. So, I built computer use—without the "use." It provides the visual grounding of an agent but keeps the human in the loop for the actual execution to prevent misclicks. 
I personally use it for the AWS Console's "treasure hunt" UI, like creating a public S3 bucket with specific CORS rules. It’s also been surprisingly helpful for non-technical tasks, like navigating obscure settings in Gradescope or Spotify. Ourguide really works for any task when you’re stuck or don't know what to do. You can download and test Ourguide here: https://ourguide.ai/downloads The project is still very early, and I’d love your feedback on where it fails, where you think it worked well, and which specific niches you think Ourguide would be most helpful for. https://ourguide.ai January 26, 2026 at 10:19PM
Show HN: Hybrid Markdown Editing https://ift.tt/Ja0Ddnt
Show HN: Hybrid Markdown Editing Shows rendered preview for unfocused lines and raw markdown for the line or block being edited. https://tiagosimoes.github.io/codemirror-markdown-hybrid/ January 26, 2026 at 11:16PM
Sunday, January 25, 2026
Show HN: I used my book generator to generate a catalog of books it can generate https://ift.tt/SBDMhFI
Show HN: I used my book generator to generate a catalog of books it can generate https://ift.tt/7rlZdqR January 25, 2026 at 11:26PM
Show HN: Uv-pack – Pack a uv environment for later portable (offline) install https://ift.tt/oavfO3E
Show HN: Uv-pack – Pack a uv environment for later portable (offline) install I kept running into the same problem: modern Python tooling, but deployments to air-gapped systems are a pain. Even with uv, moving a fully locked environment into a network-isolated machine was no fun. uv-pack should make this task less frustrating. It bundles a locked uv environment into a single directory that installs fully offline—dependencies, local packages, and optionally a portable Python interpreter. Copy it over, run one script, and you get the exact same environment every time. Just released, would love some feedback! https://ift.tt/A9YXzsl January 25, 2026 at 10:56PM
Saturday, January 24, 2026
Show HN: Remote workers find your crew https://ift.tt/anlMDsk
Show HN: Remote workers find your crew Working from home? Are you a remote employee who "misses" going to the office? Well, let's be clear about what you actually miss. No one misses the feeling of having to go and be there for 8 hours. But many people miss friends. They miss being part of a crew: going to lunch, hearing about other people's lives in person, not over Zoom. Join a co-working space, you say? Yes. We have. It's like walking into a library and trying to talk to random people and getting nothing back. Zero part-of-a-crew feeling. https://ift.tt/2rypVFD This app helps you find a crew and meet up for work to get that crew feeling back. This is my first time using Cloudflare Workers for a webapp. The free plan is amazing! You get so much compared to anything else out there in terms of limits. The SQLite database they give you is just fine; I don't miss psql. January 24, 2026 at 10:24PM
Show HN: Polymcp – Turn Any Python Function into an MCP Tool for AI Agents https://ift.tt/vzu04WD
Show HN: Polymcp – Turn Any Python Function into an MCP Tool for AI Agents I built Polymcp, a framework that allows you to transform any Python function into an MCP (Model Context Protocol) tool ready to be used by AI agents. No rewriting, no complex integrations.

Examples

Simple function:

    from polymcp.polymcp_toolkit import expose_tools_http

    def add(a: int, b: int) -> int:
        """Add two numbers"""
        return a + b

    app = expose_tools_http([add], title="Math Tools")

Run with: uvicorn server_mcp:app --reload

Now add is exposed via MCP and can be called directly by AI agents.

API function:

    import requests
    from polymcp.polymcp_toolkit import expose_tools_http

    def get_weather(city: str):
        """Return current weather data for a city"""
        response = requests.get(f" https://ift.tt/7Boukwh ")
        return response.json()

    app = expose_tools_http([get_weather], title="Weather Tools")

AI agents can call get_weather("London") to get real-time weather data instantly.

Business workflow function:

    import pandas as pd
    from polymcp.polymcp_toolkit import expose_tools_http

    def calculate_commissions(sales_data: list[dict]):
        """Calculate sales commissions from sales data"""
        df = pd.DataFrame(sales_data)
        df["commission"] = df["sales_amount"] * 0.05
        return df.to_dict(orient="records")

    app = expose_tools_http([calculate_commissions], title="Business Tools")

AI agents can now generate commission reports automatically.

Why it matters for companies • Reuse existing code immediately: legacy scripts, internal libraries, APIs. • Automate complex workflows: AI can orchestrate multiple tools reliably. • Plug-and-play: multiple Python functions exposed on the same MCP server. • Reduce development time: no custom wrappers or middleware needed. • Built-in reliability: input/output validation and error handling included.

Polymcp makes Python functions immediately usable by AI agents, standardizing integration across enterprise software. Repo: https://ift.tt/qebDa37 January 24, 2026 at 11:27PM
Friday, January 23, 2026
Show HN: Dwm.tmux – a dwm-inspired window manager for tmux https://ift.tt/GK2HRfq
Show HN: Dwm.tmux – a dwm-inspired window manager for tmux Hey, HN! With recent agentic workflows being primarily terminal- and tmux-based, I wanted to share a little project I created about a decade ago. I've continued to use this as my primary terminal "window manager" and wanted to share it in case others might find it useful. I would love to hear about others' terminal-based workflows and any other tools you may use with similar functionality. https://ift.tt/Uy6bJ3c January 24, 2026 at 04:15AM
Show HN: We added a CLI for receiving webhooks locally (no ngrok required) https://ift.tt/K0PpYWz
Show HN: We added a CLI for receiving webhooks locally (no ngrok required) https://hookverify.com January 24, 2026 at 02:33AM
Show HN: 83 browser-use trajectories, visualized https://ift.tt/1Dsluez
Show HN: 83 browser-use trajectories, visualized Hey all, Justin here. I previously built Phind, the AI search engine for developers. One of the biggest problems we had there was figuring out what went wrong with bad searches. We had tons of searches per day, but less than 1% of users gave any explicit feedback. So we were either manually digging through searches or making general system improvements and hoping they helped. This problem gets harder with agents. Traces are longer and more complex. It takes more effort to review them, so I'm building a tool that lets you analyze LLM outputs directly to help developers of LLM apps and agents understand where things are breaking and why. I've put together a demo using browser-use agent traces (gpt-5): https://trails-red.vercel.app/viewer It's early, but I have lots of ideas - live querying of past failures for currently-running agents, preference models to expand sparse signal data. Would love feedback on the demo. Also if you're building agents and have 10k+ traces per day that you're not looking at but would like to, I'd love to talk. https://trails-red.vercel.app/viewer January 23, 2026 at 11:50PM
Show HN: Obsidian Workflows with Gemini: Inbox Processing and Task Review https://ift.tt/MzAUg1b
Show HN: Obsidian Workflows with Gemini: Inbox Processing and Task Review https://gist.github.com/juanpabloaj/59bc13fbed8a0f8e87791a3fb0360c19 January 23, 2026 at 10:33PM
Thursday, January 22, 2026
Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params) https://ift.tt/9rRyk8c
Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params) Writeup (includes good/bad sample generations): https://ift.tt/Zysj5hO We're Sahil and Manu, two brothers who spent the last 2 years training text-to-video models from scratch. Today we're releasing them under Apache 2.0. These are 2B param models capable of generating 2-5 seconds of footage at either 360p or 720p. In terms of model size, the closest comparison is Alibaba's Wan 2.1 1.3B. From our testing, we get significantly better motion capture and aesthetics. We're not claiming to have reached the frontier. For us, this is a stepping stone towards SOTA - proof we can train these models end-to-end ourselves. Why train a model from scratch? We shipped our first model in January 2024 (pre-Sora) as a 180p, 1-second GIF bot, bootstrapped off Stable Diffusion XL. Image VAEs don't understand temporal coherence, and without the original training data, you can't smoothly transition between image and video distributions. At some point you're better off starting over. For v2, we use T5 for text encoding, Wan 2.1 VAE for compression, and a DiT-variant backbone trained with flow matching. We built our own temporal VAE but Wan's was smaller with equivalent performance, so we used it to save on embedding costs. (We'll open-source our VAE shortly.) The bulk of development time went into building curation pipelines that actually work (e.g., hand-labeling aesthetic properties and fine-tuning VLMs to filter at scale). What works: Cartoon/animated styles, food and nature scenes, simple character motion. What doesn't: Complex physics, fast motion (e.g., gymnastics, dancing), consistent text. Why build this when Veo/Sora exist? Products are extensions of the underlying model's capabilities. If users want a feature the model doesn't support (character consistency, camera controls, editing, style mapping, etc.), you're stuck. To build the product we want, we need to update the model itself. 
That means owning the development process. It's a bet that will take time (and a lot of GPU compute) to pay off, but we think it's the right one. What’s next? - Post-training for physics/deformations - Distillation for speed - Audio capabilities - Model scaling We kept a “lab notebook” of all our experiments in Notion. Happy to answer questions about building a model from 0 → 1. Comments and feedback welcome! https://ift.tt/2Y4bclm January 22, 2026 at 08:31PM
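For readers unfamiliar with the training objective mentioned above, here is a toy NumPy sketch of (rectified) flow matching, heavily simplified relative to an actual DiT backbone: sample a time t, form the straight-line interpolant between a noise sample and a data sample, and regress the model's velocity prediction onto the constant target velocity.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(x0, x1, velocity_fn):
    """Toy flow-matching objective: sample t ~ U(0,1), build the interpolant
    x_t = (1-t)*x0 + t*x1, and penalize the squared error between the model's
    predicted velocity at (x_t, t) and the target velocity x1 - x0."""
    t = rng.uniform(size=(x0.shape[0], 1))
    x_t = (1 - t) * x0 + t * x1
    target = x1 - x0
    pred = velocity_fn(x_t, t)
    return np.mean((pred - target) ** 2)

x0 = rng.normal(size=(8, 4))  # noise samples
x1 = rng.normal(size=(8, 4))  # "data" (e.g. VAE latent) samples

# An oracle "model" that already outputs the target velocity gives zero loss.
oracle = lambda x_t, t: x1 - x0
print(flow_matching_loss(x0, x1, oracle))  # 0.0
```

In the real system, velocity_fn would be the 2B-parameter transformer operating on VAE latents, but the loss has this same shape.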
Show HN: Synesthesia, make noise music with a colorpicker https://ift.tt/NW1OcLG
Show HN: Synesthesia, make noise music with a colorpicker This is a (silly, little) app which lets you make noise music using a color picker as an instrument. When you click on a specific point in the color picker, a bit of JavaScript maps the binary representation of the clicked-on color's hex code to a "chord" in the 24-tone equal temperament (24-TET) scale. That chord is then played back using a throttled audio-generation method implemented via Tone.js. NOTE! Turn the volume way down before using the site. It is noise music. :) https://visualnoise.ca January 22, 2026 at 09:52AM
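A hex-to-chord mapping in this spirit could be sketched as follows (the site's actual scheme may differ; the modulo-24 reduction and A4 base are assumptions): each RGB byte is reduced to one of 24 quarter-tone steps, and each step maps to a frequency via the 24-TET ratio 2**(1/24).

```python
A4 = 440.0  # assumed reference pitch

def step_to_freq(step: int, base: float = A4) -> float:
    """In 24-tone equal temperament, adjacent steps are a quarter tone apart,
    so the frequency ratio between neighbors is 2**(1/24)."""
    return base * 2 ** (step / 24)

def color_to_chord(hex_code: str):
    """'#ff8800' -> three 24-TET frequencies, one per RGB channel byte."""
    h = hex_code.lstrip("#")
    channel_bytes = [int(h[i:i + 2], 16) for i in (0, 2, 4)]
    return [round(step_to_freq(b % 24), 2) for b in channel_bytes]

print(color_to_chord("#ff8800"))
```

Black (`#000000`) maps all three channels to step 0, i.e. a "chord" of three A4s; any other color spreads across the quarter-tone scale.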
Wednesday, January 21, 2026
Show HN: Semantic search engine for Studio Ghibli movie https://ift.tt/WqKJoBe
Show HN: Semantic search engine for Studio Ghibli movie Hi HN! I built Ghibli Search, a semantic search engine for Studio Ghibli movie scenes (e.g. Spirited Away, My Neighbor Totoro, Howl's Moving Castle, etc.). Describe a dreamscape like "flying through clouds at sunset" or upload an image, and it finds visually similar scenes from the films. Live demo: https://ghibli-search.anini.workers.dev/ Full Cloudflare stack: Workers, AI Search, R2, Workers AI Open source: https://ift.tt/Z6IMVrl Would love feedback on the search quality and any ideas for improvements! https://ghibli-search.anini.workers.dev/ January 21, 2026 at 06:01PM
Show HN: Retain – A unified knowledge base for all your AI coding conversations https://ift.tt/orP8isv
Show HN: Retain – A unified knowledge base for all your AI coding conversations Hey HN! I built Retain as the evolution of claude-reflect (github.com/BayramAnnakov/claude-reflect). The original problem: I use Claude Code/Codex daily for coding, plus claude.ai and ChatGPT occasionally. Every conversation contains decisions, corrections, and patterns I forget existed weeks later. I kept re-explaining the same preferences. claude-reflect was a CLI tool that extracted learnings from Claude Code sessions. Retain takes this further with a native macOS app that: - Aggregates conversations from Claude Code, claude.ai, ChatGPT, and Codex CLI - Instant full-text search across thousands of conversations (SQLite + FTS5) It's local-first - all data stays in a local SQLite database. No servers, no telemetry. Web sync uses your browser cookies to fetch conversations directly. https://ift.tt/RedUqsN January 21, 2026 at 11:59PM
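The SQLite + FTS5 combination mentioned above is easy to try in a few lines; this sketch uses an illustrative schema, not Retain's actual one:

```python
import sqlite3

# Full-text search over conversation snippets with SQLite's FTS5 extension.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE convo USING fts5(source, content)")
con.executemany(
    "INSERT INTO convo VALUES (?, ?)",
    [
        ("claude-code", "Refactored the auth middleware to use JWT tokens"),
        ("chatgpt", "Prefer pytest fixtures over unittest setUp methods"),
    ],
)
# MATCH does tokenized, case-insensitive search; bm25() ranks by relevance.
rows = con.execute(
    "SELECT source FROM convo WHERE convo MATCH ? ORDER BY bm25(convo)",
    ("jwt",),
).fetchall()
print(rows)  # [('claude-code',)]
```

FTS5 indexes at insert time, which is what makes search across thousands of conversations effectively instant with no server involved.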
Show HN: Mirage – Experimental Java obf using reflection to break direct calls https://ift.tt/QcaXdDl
Show HN: Mirage – Experimental Java obf using reflection to break direct calls Replaces method calls and field accesses with reflection equivalents → makes static analysis and decompilers much less useful. Experimental, performance hit expected. https://ift.tt/cqBzLeg January 21, 2026 at 08:52PM
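Mirage operates on Java bytecode, but the transformation is easy to illustrate in Python (used for code samples throughout this digest): a direct call names its target statically, while the reflective version resolves the class, method, and field from strings, which can then themselves be encrypted or mangled so static analysis sees no call targets at all.

```python
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

# Direct calls: trivially visible to static analysis and decompilers.
a = Account()
a.deposit(10)

# Reflective equivalents: the target names exist only as runtime strings
# (analogous to Java's Class.forName / Method.invoke / Field.get-set).
obj = globals()["Account"]()
getattr(obj, "deposit")(10)
setattr(obj, "balance", getattr(obj, "balance") + 5)
print(obj.balance)  # 15
```

The behavior is identical either way, which is also why the README warns of a performance hit: every reflective access pays a runtime lookup cost.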
Tuesday, January 20, 2026
Show HN: Arch Linux installation lab notes turned into a clean guide https://ift.tt/Jk8DNq4
Show HN: Arch Linux installation lab notes turned into a clean guide Hi HN, I turned my personal Arch Linux installation notes into a public guide and wanted to share it. It is a set of lab notes for a fully manual install, with the exact commands written down along with the reasoning behind each choice. It is, of course, not a replacement for the Arch Wiki. The guide is structured as a modular walkthrough, with clear paths for choices like ext4 vs Btrfs, optional full disk encryption, optional NVIDIA drivers, and different package selections. It also covers how to perform the install over SSH for easy copy and paste, NVMe 4K alignment, TRIM passthrough with LUKS, and systemd-boot UEFI boot manager. The main goal was to reduce the amount of re-research needed for each install while keeping everything explicit and understandable. I also used this as an excuse to experiment with writing documentation using Zensical and to try applying most of the features it provides. Hopefully I did not overdo it. The guide is open source and licensed under Apache 2.0 or MIT, so you can fork it and adapt it to your own setup. Would love any feedback. https://ift.tt/CQJnm3O January 20, 2026 at 09:37PM
Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs https://ift.tt/nRxHh7I
Show HN: Mastra 1.0, open-source JavaScript agent framework from the Gatsby devs Hi HN, we're Sam, Shane, and Abhi. Almost a year ago, we first shared Mastra here ( https://ift.tt/Zl9DEiq ). It’s kind of fun looking back since we were only a few months into building at the time. The HN community gave a lot of enthusiasm and some helpful feedback. Today, we released Mastra 1.0 in stable, so we wanted to come back and talk about what’s changed. If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability. Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity. Agent development is changing quickly, so we’ve added a lot since February: - Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks. - Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part. - Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage. - Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server. (That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.) Anyway, we'd love for you to try Mastra out and let us know what you think. 
You can get started with `npm create mastra@latest`. We'll be around and happy to answer any questions! https://ift.tt/KareqkJ January 20, 2026 at 08:38PM
Monday, January 19, 2026
Show HN: An interactive physics simulator with 1000's of balls, in your terminal https://ift.tt/h68SH2T
Show HN: An interactive physics simulator with 1000's of balls, in your terminal https://ift.tt/KvF9Xe3 January 19, 2026 at 09:47PM
Sunday, January 18, 2026
Show HN: I quit coding years ago. AI brought me back https://ift.tt/4Ym7GgK
Show HN: I quit coding years ago. AI brought me back Quick background: I used to code. Studied it in school, wrote some projects, but eventually convinced myself I wasn't cut out for it. Too slow, too many bugs, imposter syndrome — the usual story. So I pivoted, ended up as an investment associate at an early-stage angel fund, and haven't written real code in years. Fast forward to now. I'm a Buffett nerd — big believer in compound interest as a mental model for life. I run compound interest calculations constantly. Not because I need to, but because watching numbers grow over 30-40 years keeps me patient when markets get wild. It's basically meditation for long-term investors. The problem? Every compound interest calculator online is terrible. Ugly interfaces, ads covering half the screen, can't customize compounding frequency properly, no year-by-year breakdowns. I've tried so many. They all suck. When vibe coding started blowing up, something clicked. Maybe I could actually build the calculators I wanted? I don't have to be a "real developer" anymore — I just need to describe what I want clearly. So I tried it. Two weeks and ~$100 (Opus 4.5 thinking model) in API costs later: I somehow have 60+ calculators. Started with compound interest, naturally. Then thought "well, while I'm here..." and added mortgage, loan amortization, savings goals, retirement projections. Then it spiraled — BMI calculator, timezone converter, regex tester. Oops. The AI (I'm using Claude via Windsurf) handled the grunt work beautifully. I'd describe exactly what I wanted — "compound interest calculator with monthly/quarterly/yearly options, year-by-year breakdown table, recurring contribution support" — and it delivered. With validation, nice components, even tests. What I realized: my years away from coding weren't wasted. I still understood architecture, I still knew what good UX looked like, I still had domain expertise (financial math). I just couldn't type it all out efficiently. 
AI filled that gap perfectly. Vibe coding didn't make me a 10x engineer. But it gave me permission to build again. Ideas I've had for years suddenly feel achievable. That's honestly the bigger win for me. Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean." Site's live at https://calquio.com . The compound interest calculator is still my favorite page — finally exactly what I wanted. Curious if others have similar stories. Anyone else come back to building after stepping away? https://calquio.com January 19, 2026 at 04:50AM
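The year-by-year breakdown he wanted boils down to the standard compound-interest recurrence with recurring contributions; a minimal sketch (compounding frequency and end-of-period contribution timing are simplifying assumptions, not calquio's actual implementation):

```python
def compound_schedule(principal, annual_rate, years, periods_per_year=12,
                      contribution=0.0):
    """Return the balance at the end of each year, compounding once per
    period and adding the contribution at the end of each period."""
    rate = annual_rate / periods_per_year
    balance, yearly = principal, []
    for _year in range(years):
        for _ in range(periods_per_year):
            balance = balance * (1 + rate) + contribution
        yearly.append(round(balance, 2))
    return yearly

# $10,000 at 7%/yr, monthly compounding, $100/month contribution, 30 years:
print(compound_schedule(10_000, 0.07, 30, contribution=100)[-1])
```

The same loop, annotated per period instead of per year, is all a year-by-year breakdown table needs.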
Show HN: Beats, a web-based drum machine https://ift.tt/ZG2L0Kh
Show HN: Beats, a web-based drum machine Hello all! I've been an avid fan of Pocket Operators by Teenage Engineering since I found out about them. I even own an EP-133 K.O. II today, which I love. A couple of months ago, Reddit user andiam03 shared a Google Sheet with some drum patterns [1]. I thought it was a very cool way to share and understand beats. During the weekend I coded a basic version of this app I am sharing today. I iterated over it in my free time, and yesterday I felt like I had a pretty good version to share with y'all. It's not meant to be a sequencer but rather a way to experiment with beats and basic sounds, save them, and use them in your songs. It also has a sharing feature with a link. It was built using Tone.js [2] and Stimulus [3], and deployed on Render [4] as a static website. I used an LLM to read the Tone.js documentation and generate sounds, since I have no knowledge about sound production, and modified from there. Anyway, hope you like it! I had a blast building it. [0]: https://ift.tt/ORaXDQ6 [1]: https://docs.google.com/spreadsheets/d/1GMRWxEqcZGdBzJg52soe... [2]: https://tonejs.github.io [3]: https://ift.tt/JnuZbsm [4]: http://render.com https://ift.tt/sLgCz0h January 19, 2026 at 01:10AM
Show HN: Map of illegal dumping reports in Oakland https://ift.tt/L6RHQCa
Show HN: Map of illegal dumping reports in Oakland https://illegal-dumping-map.vercel.app/oakland January 19, 2026 at 12:24AM
Show HN: Dock – Slack minus the bloat, tax, and 90-day memory loss https://ift.tt/cDoja7r
Show HN: Dock – Slack minus the bloat, tax, and 90-day memory loss https://getdock.io/ January 19, 2026 at 12:42AM
Saturday, January 17, 2026
Show HN: Docker.how – Docker command cheat sheet https://ift.tt/EgbGhy1
Show HN: Docker.how – Docker command cheat sheet https://docker.how/ January 18, 2026 at 12:17AM
Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) https://ift.tt/esx6f1n
Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) Hi HN, I’m releasing minikv, a distributed key-value and object store in Rust. What is minikv? minikv is an open-source, distributed storage engine built for learning, experimentation, and self-hosted setups. It combines a strongly-consistent key-value database (Raft), S3-compatible object storage, and basic multi-tenancy. I started minikv as a learning project about distributed systems, and it grew into something production-ready and fun to extend. Features/highlights:
- Raft consensus with automatic failover and sharding
- S3-compatible HTTP API (plus REST/gRPC APIs)
- Pluggable storage backends: in-memory, RocksDB, Sled
- Multi-tenant: per-tenant namespaces, role-based access, quotas, and audit
- Metrics (Prometheus), TLS, JWT-based API keys
- Easy to deploy (single binary, works with Docker/Kubernetes)

Quick demo (single node):

git clone https://ift.tt/U1Mrpot
cd minikv
cargo run --release -- --config config.example.toml
curl localhost:8080/health/ready

# S3 upload + read
curl -X PUT localhost:8080/s3/mybucket/hello -d "hi HN"
curl localhost:8080/s3/mybucket/hello

Docs, cluster setup, and architecture details are in the repo. I’d love to hear feedback, questions, ideas, or your stories running distributed infra in Rust! Repo: https://ift.tt/ndfgwyj Crate: https://ift.tt/J8KRcoi https://ift.tt/ndfgwyj January 17, 2026 at 11:39PM
Friday, January 16, 2026
Show HN: Kerns – A Workspace for Deep, Ongoing Research https://ift.tt/i56mJRb
Show HN: Kerns – A Workspace for Deep, Ongoing Research Deep research rarely happens in a single pass. For high-stakes work, or to deeply understand something, you run many deep researches in parallel, revisit them over time, and synthesize understanding gradually. We combine deep research with other docs. Kerns is built for this mode of research. It groups multiple deep researches under a single research area, so follow-ups and side investigations accumulate instead of fragmenting across chats and docs. Outputs are structured so you can start shallow and selectively go deep—because you don’t know upfront which deep researches will matter. Synthesis is an explicit second step. Kerns helps you connect and reconcile insights across deep researches, grounded in the sourced material rather than one-off summaries. This stage also lets you consult other docs on the same level as deep researches. Research doesn’t stop once a report is written. Kerns passively keeps your work up to date by monitoring sources and surfacing meaningful changes, so staying current doesn’t require restarting. Built for researchers, analysts, investors, and serious self-learners doing multi-week or multi-month research where clarity and correctness actually matter. Would love feedback! https://www.kerns.ai/ January 17, 2026 at 12:08AM
Show HN: Routing with OSM, PgRouting and MapLibre https://ift.tt/n7MYz2e
Show HN: Routing with OSM, PgRouting and MapLibre I’ve been working on an example application for routing with OSM data. The repository makes it easy to experiment with the implementation. The Angular frontend uses maplibre-gl for displaying the map and visualizing the routes. https://ift.tt/3eblEOn January 16, 2026 at 11:13PM
Show HN: Contribute to GitHub Anonymously https://ift.tt/4aZRojb
Show HN: Contribute to GitHub Anonymously gitGost allows anonymous contributions to public GitHub repositories. It removes author info, email, timestamps, and opens PRs from a neutral bot. No accounts, OAuth, or tokens required. Built in Go, open source (AGPL-3.0), with abuse prevention via rate limits and validation. Feedback welcome. https://ift.tt/Ut1HECf January 16, 2026 at 11:38PM
Show HN: CC TV remote plugin, pauses your binge-watching when Claude goes idle https://ift.tt/boOxWdl
Show HN: CC TV remote plugin, pauses your binge-watching when Claude goes idle https://ift.tt/Zw7Uh8r January 16, 2026 at 10:53PM
Thursday, January 15, 2026
Show HN: Beni AI – Real-time face-to-face AI companion https://ift.tt/VXq416O
Show HN: Beni AI – Real-time face-to-face AI companion Beni AI is a real-time AI companion for face-to-face conversations. It’s built to feel emotionally natural, consistent, and personal, with long-term memory that evolves through interaction. Try Beni AI here: >> https://thebeni.ai/ https://thebeni.ai/ January 15, 2026 at 04:51AM
Show HN: The Hessian of tall-skinny networks is easy to invert https://ift.tt/FBLVZzK
Show HN: The Hessian of tall-skinny networks is easy to invert It turns out the inverse of the Hessian of a deep net is easy to apply to a vector. Doing this naively takes cubically many operations in the number of layers (so impractical), but it's possible to do this in time linear in the number of layers (so very practical)! This is possible because the Hessian of a deep net has a matrix polynomial structure that factorizes nicely. The Hessian-inverse-product algorithm that takes advantage of this is similar to running backprop on a dual version of the deep net. It echoes an old idea of Pearlmutter's for computing Hessian-vector products. Maybe this idea is useful as a preconditioner for stochastic gradient descent? https://ift.tt/L5sPNRu January 16, 2026 at 12:36AM
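A cheap way to sanity-check any Hessian-vector machinery, including the Pearlmutter-style idea mentioned above, is the finite-difference identity Hv ≈ (g(x + εv) − g(x − εv)) / (2ε). Below is a minimal Python sketch on a quadratic where the exact Hessian is known; all names here are my own illustration, not the post's algorithm.

```python
# Sanity-check sketch (assumptions: f(x) = 0.5 x^T A x with symmetric A,
# so grad f = A x and the exact Hessian is A).
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def grad(A, x):
    # gradient of f(x) = 0.5 x^T A x for symmetric A
    return matvec(A, x)

def hvp(A, x, v, eps=1e-5):
    # central difference of the gradient approximates the
    # Hessian-vector product H v
    xp = [xi + eps * vi for xi, vi in zip(x, v)]
    xm = [xi - eps * vi for xi, vi in zip(x, v)]
    gp, gm = grad(A, xp), grad(A, xm)
    return [(a - b) / (2 * eps) for a, b in zip(gp, gm)]

A = [[2.0, 1.0], [1.0, 3.0]]   # plays the role of the Hessian
x, v = [0.5, -1.0], [1.0, 2.0]
approx = hvp(A, x, v)
exact = matvec(A, v)           # for a quadratic, Hv is exactly Av
```

For a quadratic the gradient is linear, so the central difference recovers Av up to floating-point error; the same check applies to any fast Hessian-inverse-product by verifying H(H⁻¹v) ≈ v.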
Show HN: I built an 11MB offline PDF editor because mobile Acrobat is 500MB https://ift.tt/2VBd5X3
Show HN: I built an 11MB offline PDF editor because mobile Acrobat is 500MB https://revpdf.com/ January 15, 2026 at 11:00PM
Wednesday, January 14, 2026
Show HN: Webctl – Browser automation for agents based on CLI instead of MCP https://ift.tt/6VXwve3
Show HN: Webctl – Browser automation for agents based on CLI instead of MCP https://ift.tt/8w65MXK January 14, 2026 at 06:34PM
Show HN: Repomance: A Tinder style app for GitHub repo discovery https://ift.tt/UBbOsrg
Show HN: Repomance: A Tinder style app for GitHub repo discovery Hi everyone, Repomance is an app for discovering curated and trending repositories. Swipe to star them directly using your GitHub account. It is currently available on iOS, iPadOS, and macOS. I plan to develop an Android version once the app reaches 100 users. Repomance is open source: https://ift.tt/aAit6dZ All feedback is welcome, hope you enjoy using it. https://ift.tt/S5MerGJ January 14, 2026 at 10:54PM
Tuesday, January 13, 2026
Show HN: Nogic – VS Code extension that visualizes your codebase as a graph https://ift.tt/FavIX4c
Show HN: Nogic – VS Code extension that visualizes your codebase as a graph I built Nogic, currently a VS Code extension, because AI tools make code grow faster than developers can build a mental model by jumping between files. Exploring structure visually has been helping me onboard to unfamiliar codebases faster. It’s early and rough, but usable. Would love feedback on whether this is useful and what relationships are most valuable to visualize. https://marketplace.visualstudio.com/items?itemName=Nogic.nogic January 13, 2026 at 10:43PM
Show HN: MemSky: Bluesky timeline viewer web app that saves where you left off https://ift.tt/wUnpZbQ
Show HN: MemSky: Bluesky timeline viewer web app that saves where you left off Hi HN! I created this web app (with PWA support) because the official Bluesky app lacks the one thing I want: to start me at the place I left off in the chronological timeline. This is intended as a reader, not a full Bluesky client. I use it in conjunction with the Bluesky app or website. Tap the timestamp of a post to open it in Bluesky. I’ve tried my best to display stable timeline updates that match what the Tweetie app pioneered so long ago: keeping your visual place in the timeline when refreshing. This was tricky to do in a web app… I wrote more about it here[0]. 0: https://memalign.github.io/p/memsky.html https://memalign.github.io/m/memsky/index.html January 13, 2026 at 11:23PM
Show HN: Timberlogs – Drop-in structured logging for TypeScript https://ift.tt/6jIqR7S
Show HN: Timberlogs – Drop-in structured logging for TypeScript Hi HN! I built Timberlogs because I was tired of console.log in production and existing logging solutions requiring too much setup. Timberlogs is a drop-in structured logging library for TypeScript:

npm install timberlogs-client

import { createTimberlogs } from "timberlogs-client";

const timber = createTimberlogs({
  source: "my-app",
  environment: "production",
  apiKey: process.env.TIMBER_API_KEY,
});

timber.info("User signed in", { userId: "123" });
timber.error("Payment failed", error);

Features:
- Auto-batching with retries
- Automatic redaction of sensitive data (passwords, tokens)
- Full-text search across all your logs
- Real-time dashboard
- Flow tracking to link related logs

It's currently in beta and free to use. Would love feedback from the HN community. Site: https://timberlogs.dev Docs: https://ift.tt/PQG63Fg npm: https://ift.tt/dwN5imt GitHub: https://ift.tt/fGmvKig January 13, 2026 at 10:43PM
Monday, January 12, 2026
Show HN: Blockchain-Based Equity with Separated Economic and Governance Rights https://ift.tt/p4hnPW1
Show HN: Blockchain-Based Equity with Separated Economic and Governance Rights I've been researching blockchain-based capital markets and developed a framework for tokenized equity with separated economic, dividend, and governance rights. Core idea: Instead of bundling everything into one share, issue three token types: - LOBT: Economic participation, no governance - PST: Automated dividends, no ownership - OT: Full governance control Key challenge: Verifying real-world business operations on-chain without trusted intermediaries. I propose decentralized oracles + ZK proofs, but acknowledge significant unsolved problems. This is research/RFC seeking technical feedback on oracle architecture, regulatory viability, and which verticals this makes sense for. Thoughts? https://ift.tt/gIFD5no January 13, 2026 at 04:33AM
Show HN: Sophomore at UMich, built an app with my dad https://ift.tt/HJC93WI
Show HN: Sophomore at UMich, built an app with my dad Over the past three months, I’ve been working with my dad on this app as a way to learn Cursor, shadcn/ui, and Next.js. I’m currently studying UX design at the University of Michigan, and I’ve been really impressed by how much vibe-coding tools have advanced over the last six months. This work journal includes private AI-generated insights with scheduled summaries that can be shared with your team or manager. You can also choose to make a summary public—like I’ve done here for my work on this web app: https://ift.tt/43bXDaB If anyone has feedback or suggestions on how I could improve my design or coding skills, I’d really appreciate it. https://ift.tt/nEyhzCl January 13, 2026 at 12:20AM
Show HN: Agent-of-empires: OpenCode and Claude Code session manager https://ift.tt/O4YZpLj
Show HN: Agent-of-empires: OpenCode and Claude Code session manager Hi! I’m Nathan, an ML Engineer at Mozilla.ai. I built agent-of-empires (aoe), a CLI application to help you manage all of your running Claude Code/Opencode sessions and know when they are waiting for you.
- Written in Rust and relies on tmux for security and reliability
- Monitors the state of CLI sessions to tell you when an agent is running vs idle vs waiting for your input
- Manage sessions by naming them, grouping them, and configuring profiles for various settings

I'm passionate about getting self-hosted open-weight LLMs to be valid options to compete with proprietary closed models. One roadblock for me is that although tools like opencode allow you to connect to local LLMs (Ollama, LM Studio, etc), they generally run muuuuuch slower than models hosted by Anthropic and OpenAI. I would start a coding agent on a task, but then while I was sitting waiting for that task to complete, I would start opening new terminal windows to start multitasking. Pretty soon, I was spending a lot of time toggling between terminal windows to see which one needed me: like help in adding a clarification, approving a new command, or giving it a new task. That’s why I built agent-of-empires (“aoe”). With aoe, I can launch a bunch of opencode and Claude Code sessions and quickly see their status or toggle between them, which helps me avoid having a lot of terminal windows open, or having to manually attach and detach from tmux sessions myself. It’s helping me give local LLMs a fair try, because them being slower is now much less of a bottleneck. You can install it with:

curl -fsSL https://ift.tt/BSiUqdr... | bash

or:

brew install njbrake/aoe/aoe

Then launch it by simply entering the command `aoe`. I’m interested in what you think as well as what features you think would be useful to add!
I am planning to add some further features around sandboxing (with docker) as well as support for intuitive git worktrees and am curious if there are any opinions about what should or shouldn’t be in it. I decided against MCP management or generic terminal usage, to help keep the tool focused on parts of agentic coding that I haven’t found a usable solution for. I hit the character limit on this post which prevented me from including a view of the output, but the readme on the github link has a screenshot showing what it looks like. Thanks! https://ift.tt/SGEa0gH January 12, 2026 at 06:23PM
Show HN: AI video generator that outputs React instead of video files https://ift.tt/zixpOUT
Show HN: AI video generator that outputs React instead of video files Hey HN! This is Mayank from Outscal with a new update. Our website is now live. Quick context: we built a tool that generates animated videos from text scripts. The twist: instead of rendering pixels, it outputs React/TSX components that render as the video. Try it: https://ai.outscal.com/ Sample video: https://ift.tt/Z4ztoBm... You pick a style (pencil sketch or neon), enter a script (up to 2000 chars), and it runs: scene direction → ElevenLabs audio → SVG assets → Scene Design → React components → deployed video. What we learned building this: We built the first version on Claude Code. Even with a human triggering commands, agents kept going off-script — they had file tools and would wander off reading random files, exploring tangents, producing inconsistent output. The fix was counterintuitive: fewer tools, not more guardrails. We stripped each agent to only what it needed and pre-fed context instead of letting agents fetch it themselves. Quality improved immediately. We wouldn't launch the web version until this was solid. Moved to Claude Agent SDK, kept the same constraints, now fully automated. Happy to discuss the agent architecture, why React-as-video, or anything else. https://ai.outscal.com/ January 12, 2026 at 11:03PM
Sunday, January 11, 2026
Show HN: Interactive California Budget (by Claude Code) https://ift.tt/KtVdGu6
Show HN: Interactive California Budget (by Claude Code) There's been a lot of discussion around the california budget and some proposed tax policies, so I asked claude code to research the budget and turn it into an interactive dashboard. Using async subagents claude was able to research ~a dozen budget line items at once across multiple years, adding lots of helpful context and graphs to someone like me who was starting with little familiarity. It still struggles with frontend changes, but for research this probably 20-40x's my throughput. Let me know any additional data or visualizations that would be interesting to add! https://ift.tt/Zjls860 January 11, 2026 at 09:13PM
Show HN: Epstein IM – talk to Epstein clone in iMessage https://ift.tt/Z7C1oth
Show HN: Epstein IM – talk to Epstein clone in iMessage https://epstein.im/ January 11, 2026 at 04:58AM
Saturday, January 10, 2026
Show HN: Persistent Memory for Claude Code (MCP) https://ift.tt/6So7Dbz
Show HN: Persistent Memory for Claude Code (MCP) This is my attempt in building a memory that evolves and persist for claude code. My approach is inspired from Zettelkasten method, memories are atomic, connected and dynamic. Existing memories can evolve based on newer memories. In the background it uses LLM to handle linking and evolution. I have only used it with claude code so far, it works well with me but still early stage, so rough edges likely. I'm planning to extend it to other coding agents as I use several different agents during development. Looking for feedbacks! https://ift.tt/V9zjLfb January 11, 2026 at 12:34AM
Show HN: I used Claude Code to discover connections between 100 books https://ift.tt/BQ0hY9I
Show HN: I used Claude Code to discover connections between 100 books I think LLMs are overused to summarise and underused to help us read deeper. I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them. I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising. On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison. One of my favourite trail of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans ( https://ift.tt/atjFM9V ). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset. Details: * The books are picked from HN’s favourites (which I collected before: https://ift.tt/X3nRNpF ). * Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10. * Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes. * There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window. * Everything is stored in SQLite and manipulated using a set of CLI tools. I wrote more about the process here: https://ift.tt/oJSR8wO I’m curious if this way of reading resonates for anyone else - LLM-mediated or not. https://ift.tt/QewxTDW January 10, 2026 at 08:56PM
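One of the browsing modes listed above, topics co-occurring within a chunk window, is easy to picture as a SQLite self-join. A toy sketch with invented schema and data (the real pipeline's tables are not public):

```python
# Toy version of "topics cooccurring within a chunk window": pair up
# topics whose chunks are close together in the same book. Schema and
# rows are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE chunk_topics (book TEXT, chunk INT, topic TEXT)")
con.executemany(
    "INSERT INTO chunk_topics VALUES (?, ?, ?)",
    [("jobs", 1, "reality distortion"), ("jobs", 2, "demos"),
     ("theranos", 5, "demos"), ("theranos", 6, "secrecy"),
     ("thiel", 9, "cults")],
)

# Topic pairs whose chunks fall within a window of 2 in the same book
rows = con.execute("""
    SELECT a.topic, b.topic
    FROM chunk_topics a JOIN chunk_topics b
      ON a.book = b.book
     AND a.topic < b.topic
     AND ABS(a.chunk - b.chunk) <= 2
""").fetchall()
```

Following the shared "demos" topic from one book's pairs into another's is the kind of hop that builds the trails the post describes.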
Friday, January 9, 2026
Show HN: Accio – Summon apps with keyboard shortcuts https://ift.tt/aQCwJZE
Show HN: Accio – Summon apps with keyboard shortcuts https://ift.tt/z1fZibl January 9, 2026 at 11:41PM
Show HN: EuConform – Offline-first EU AI Act compliance tool (open source) https://ift.tt/zQ4T7LH
Show HN: EuConform – Offline-first EU AI Act compliance tool (open source) I built this as a personal open-source project to explore how EU AI Act requirements can be translated into concrete, inspectable technical checks. The core idea is local-first compliance: – risk classification (Articles 5–15, incl. prohibited use cases) – bias evaluation using CrowS-Pairs – automatic Annex IV–oriented PDF reports – no cloud services or external APIs (browser-based + Ollama) I’m especially interested in feedback on whether this kind of technical framing of AI regulation makes sense in real-world projects. https://ift.tt/tLPK8lz January 9, 2026 at 11:11PM
Thursday, January 8, 2026
Show HN: A geofence-based social network app 6 years in development https://ift.tt/Hyr4Om3
Show HN: A geofence-based social network app 6 years in development My name is Adrian. I'm a software engineer and I spent 6 years developing a perimeter-based geofence social media app. What it does:
- Allows you to load a custom perimeter anywhere on the geographic map (180° E and W longitude and 90° N and S latitude), to cover any area of interest
- Chat rooms get loaded within the perimeter
- You can chat with people within the perimeter

I developed a mobile app that uses an advanced geofence-based networking system from 2013 to 2019. My goal was to connect users within polygon geofences anywhere in the world. The app is capable of loading millions of polygon geofences anywhere in the world. https://ift.tt/UnRZbuE... But people didn't really have a need for this. So after failing, I spent the next 6 years trying to find ideas to use FencedIn for. I tried a location-based video app and a place-based app that had multiple features. Nothing worked, but now I'm almost finished developing ChatLocal, an app that allows you to load a perimeter anywhere on the geographic map, which loads chat rooms. The tech stack is 100% Java (low-level mostly). I have a backend, commons library and an Android app. Java was the natural choice back in 2013. However, I still wouldn't choose anything else today. Java is the best for long-term large-scale projects. (I'm also using WildFly, PostgreSQL and a Linux server.) This app is still not fully finished, but I think the impact on society might be tremendous. The previous app to ChatLocal, LocalVideo, is fully up on the Google Play store and can be tested. It has 88% of the features of ChatLocal, including especially the perimeter-based loading system. The feedback I'm mostly looking for is new ideas and concepts to add to this location-based social media app, and how strong of a value proposition the app has for society. https://ift.tt/KFw8ptX January 9, 2026 at 12:56AM
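The core primitive behind loading chat rooms within a perimeter is a point-in-polygon test. A standard ray-casting sketch in Python, purely illustrative (the app itself is Java, and its actual geofence engine is not public):

```python
# Ray-casting point-in-polygon test: cast a ray east from the point and
# count edge crossings; an odd count means the point is inside.
def in_polygon(lon, lat, polygon):
    """polygon: list of (lon, lat) vertices; returns True if inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge straddle the point's latitude?
        if (y1 > lat) != (y2 > lat):
            # Longitude where the edge crosses that latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

A real implementation would also need spatial indexing (e.g. an R-tree) to check millions of polygons efficiently, but the per-polygon test looks like this.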
Show HN: Remotedays – Cross-border remote work compliance for EU companies https://ift.tt/GYadb8O
Show HN: Remotedays – Cross-border remote work compliance for EU companies Remotedays helps companies maintain compliance with EU cross-border remote work regulations. When employees work remotely across borders (France, Belgium, Germany, Luxembourg), exceeding specific thresholds triggers social security and tax liability shifts for both employees and employers.

Thresholds:
- France/Belgium: 34 days/year
- Germany: 183 days/year

Most companies track this manually or not at all, creating audit and penalty risk.

Key features:
- For employees: one-click daily declarations via automated email prompts with real-time threshold tracking
- For HR/compliance: real-time compliance dashboard with alerts across the entire workforce, and a complete audit trail for regulatory inspections

Links:
- Live: https://remotedays.app
- Demo: https://ift.tt/jOc67NV

Built with security and GDPR compliance as core requirements. Currently seeking feedback and open to customization or developing similar compliance solutions for specific organizational needs. Questions and feedback welcome. January 9, 2026 at 12:37AM
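The threshold logic described above is straightforward to sketch. The day limits come from the post; the function, its names, and the warning ratio are my own illustration, not the product's code:

```python
# Illustrative threshold tracker. Limits are the per-country remote-day
# thresholds the post cites; the 80% "warning" band is an assumption.
THRESHOLDS = {"FR": 34, "BE": 34, "DE": 183}   # remote days per year

def check_compliance(declared_days, warn_ratio=0.8):
    """declared_days: {country_code: days declared this year}.

    Returns a per-country status: "ok", "warning", or "exceeded".
    """
    status = {}
    for country, days in declared_days.items():
        limit = THRESHOLDS[country]
        if days > limit:
            status[country] = "exceeded"
        elif days >= warn_ratio * limit:
            status[country] = "warning"
        else:
            status[country] = "ok"
    return status
```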
Show HN: Turn your PRs into marketing updates https://ift.tt/Y86ZuXf
Show HN: Turn your PRs into marketing updates https://personabox.app January 8, 2026 at 11:35PM
Wednesday, January 7, 2026
Show HN: I visualized the entire history of Citi Bike in the browser https://ift.tt/sYJfm3O
Show HN: I visualized the entire history of Citi Bike in the browser Each moving arrow represents one real bike ride out of 291 million, and if you've ever taken a Citi Bike before, you are included in this massive visualization! You can search for your ride using Cmd + K and your Citi Bike receipt, which should give you the time of your ride and start/end station. Everything is open source: https://ift.tt/polBJrk Some technical details: - No backend! Processed data is stored in parquet files on a Cloudflare CDN, and queried directly by DuckDB WASM - deck.gl w/ Mapbox for GPU-accelerated rendering of thousands of concurrent animated bikes - Web Workers decode polyline routes and do as much precomputation as possible off the main thread - Since only (start, end) station pairs are provided, routes are generated by querying OSRM for the shortest path between all 2,400+ station pairs https://bikemap.nyc/ January 7, 2026 at 10:57PM
Show HN: bikemap.nyc – visualization of the entire history of Citi Bike https://ift.tt/JhoLarY
Show HN: bikemap.nyc – visualization of the entire history of Citi Bike Each moving arrow represents a real bike ride. There are 291 million rides in total, covering 12 years of history from June 2013 to December 2025, based on public data published by Lyft. If you've ever taken a Citi Bike ride before, you are included in this massive visualization! You can search for your ride using Cmd + K and your Citi Bike receipt, which should give you the time of your ride and start/end station. Some technical details: - No backend! Processed data is stored in parquet files on a CDN, and queried directly by DuckDB WASM - deck.gl w/ Mapbox for GPU-accelerated rendering of thousands of concurrent animated bikes - Web Workers decode polyline routes and do as much precomputation as possible off the main thread - Since only (start, end) station pairs are provided, routes are generated by querying OSRM for the shortest path between all 2,400+ station pairs Legend: - Blue = E-Bike - Purple = Classic Bike - Red = Bike docked - Green = Bike unlocked https://ift.tt/QHh0m1b January 8, 2026 at 12:45AM
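The polyline-decoding step above can be illustrated with a decoder for the Google encoded-polyline format, which OSRM emits by default. The site's Web Workers presumably do this in JavaScript; this Python version is only a sketch of the format, not the project's code:

```python
# Decoder for the Google encoded-polyline format: each coordinate is a
# zigzag-encoded delta packed into 5-bit chunks offset by 63.
def decode_polyline(encoded, precision=1e5):
    coords, index, lat, lng = [], 0, 0, 0
    while index < len(encoded):
        for is_lng in (False, True):
            result, shift = 0, 0
            while True:
                b = ord(encoded[index]) - 63
                index += 1
                result |= (b & 0x1F) << shift
                shift += 5
                if b < 0x20:        # high bit clear: last chunk
                    break
            # undo zigzag encoding to recover a signed delta
            delta = ~(result >> 1) if result & 1 else result >> 1
            if is_lng:
                lng += delta
            else:
                lat += delta
        coords.append((lat / precision, lng / precision))
    return coords

# The canonical example from the format documentation:
# (38.5, -120.2), (40.7, -120.95), (43.252, -126.453)
route = decode_polyline("_p~iF~ps|U_ulLnnqC_mqNvxq`@")
```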
Tuesday, January 6, 2026
Show HN: llmgame.ai – The Wikipedia Game but with LLMs https://ift.tt/visYCVz
Show HN: llmgame.ai – The Wikipedia Game but with LLMs I used to play the Wikipedia Game in high school and had an idea for applying the same mechanic of clicking from concept to concept to LLMs. Will post another version that runs with an LLM entirely in the browser soon, but for now, please enjoy as long as my credits last... Warning: the LLM does not always cooperate https://www.llmgame.ai January 6, 2026 at 07:07AM
Monday, January 5, 2026
Show HN: I made a tool that steals any website's UI into .md context https://ift.tt/T86RMiE
Show HN: I made a tool that steals any website's UI into .md context I made Steal UI because I got tired of wasting hours recreating design systems every time I shipped a new project. Manually inspecting Linear's colors, guessing Stripe’s spacing, or reverse-engineering Vercel’s typography felt like a waste of time. I’m a solution designer at a robotics company by day, but at night, I ship side projects. After killing 5 previous projects because I burned too much time on design instead of shipping, I decided to automate the "DNA" extraction of the web. How it works: Paste any public URL. Steal UI uses a mix of web scraping and AI interpretation to extract exact hex values, fonts, spacing, and shadows. I’ve baked custom rules into the code to ensure the AI follows strict atomic design principles. The output is a single .md file. Drop it in Cursor as @context and you can start building production-ready UI in seconds. Tech Stack: Next.js for the frontend. Headless scraping + AI APIs: I iterated on the prompts and heuristics until the output was clean enough for production code. Looking for feedback https://www.stealui.xyz/ January 6, 2026 at 12:29AM
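One small, concrete piece of the extraction described above can be sketched without any AI at all: pulling hex color values out of fetched CSS. This regex-only toy is my own illustration of one step, not the tool's pipeline:

```python
# Toy "palette extraction": find 3- and 6-digit hex colors in a
# stylesheet and rank them by frequency. A real extractor would also
# handle rgb()/hsl()/CSS variables and computed styles.
import re
from collections import Counter

HEX_RE = re.compile(r"#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})\b")

def extract_palette(css_text, top=5):
    """Return the most frequent hex colors, lowercased."""
    colors = [c.lower() for c in HEX_RE.findall(css_text)]
    return [color for color, _ in Counter(colors).most_common(top)]

css = ".btn{color:#FFF;background:#5e6ad2}.card{border:1px solid #5E6AD2}"
```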
Show HN: ISO 8583 simulator in Python with LLM-powered message explanation https://ift.tt/0Demkli
Show HN: ISO 8583 simulator in Python with LLM-powered message explanation https://ift.tt/sRu5ItU January 6, 2026 at 12:33AM
Show HN: WOLS – Open standard for mushroom cultivation tracking https://ift.tt/JAHoYjI
Show HN: WOLS – Open standard for mushroom cultivation tracking I built an open labeling standard for tracking mushroom specimens through their lifecycle (from spore/culture to harvest). v1.1 adds clonal generation tracking (distinct from filial/strain generations) and conforms to JSON-LD for interoperability with agricultural/scientific data systems. Spec (CC 4.0): https://ift.tt/8dbyruF Client libraries (Apache 2.0):
- Python + CLI: pip install wols (also on GHCR)
- TypeScript/JS: npm install @wemush/wols

Background: Mycology has fragmented data practices (misidentified species, inconsistent cultivation logs, no shared vocabulary for tracking genetics across generations). This is an attempt to fix that. Looking for feedback from anyone working with biological specimen tracking, agricultural data systems, or mycology. https://ift.tt/ImbUjFw January 5, 2026 at 10:30PM
Show HN: CloudMasters TUI – Shop Boxes Across AWS, Azure, GCP, Hetzner, Vultr https://ift.tt/QgVU1bW
Show HN: CloudMasters TUI – Shop Boxes Across AWS, Azure, GCP, Hetzner, Vultr https://ift.tt/ZCO2yFD January 5, 2026 at 11:07PM
Sunday, January 4, 2026
Show HN: I made R/place for LLMs https://ift.tt/0n7KSdy
Show HN: I made R/place for LLMs I built AI Place, a vLLM-controlled pixel canvas inspired by r/place. Instead of users placing pixels, an LLM paints the grid continuously and you can watch it evolve live. The theme rotates daily. Currently, the canvas is scored using CLIP ViT-B/32 against a prompt (e.g., Pixelart of ${theme}). The highest-scoring snapshot is saved to the archive at the end of each day. The agents work in a simple loop: Input: Theme + image of current canvas Output: Python code to update specific pixel coordinates + One word description Tech: Next.js, SSE realtime updates, NVIDIA NIM (Mistral Large 3/GPT-OSS/Llama 4 Maverick) for the painting decisions Would love feedback! (or ideas for prompts/behaviors to try) https://art.heimdal.dev January 4, 2026 at 11:50PM
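The output side of the agent loop above can be pictured as applying coordinate updates to a sparse grid. This is a hedged Python sketch: the grid size, names, and the triple format are invented, and it stands in for executing the model-generated code the post describes:

```python
# Illustrative canvas state: a sparse dict mapping (x, y) to a color.
# Dimensions and the update format are assumptions, not the site's API.
WIDTH, HEIGHT = 64, 64
canvas = {}

def apply_updates(updates):
    """Apply in-bounds pixel writes; ignore anything off-canvas."""
    applied = 0
    for x, y, color in updates:
        if 0 <= x < WIDTH and 0 <= y < HEIGHT:
            canvas[(x, y)] = color
            applied += 1
    return applied

# Two valid writes and one out-of-bounds write
n = apply_updates([(0, 0, "#ff0000"), (63, 63, "#00ff00"), (99, 5, "#000")])
```

Bounds-checking the model's output matters here, since an LLM asked for pixel coordinates will occasionally paint outside the grid.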
Show HN: Hover – IDE style hover documentation on any webpage https://ift.tt/q1a92PB
Show HN: Hover – IDE style hover documentation on any webpage I thought it would be interesting to have IDE-style hover docs outside the IDE. Hover is a Chrome extension that gives you IDE-style hover tooltips on any webpage: documentation sites, ChatGPT, Claude, etc. How it works:
- When a code block comes into view, the extension detects tokens and sends the code to an LLM (via OpenRouter or custom endpoint)
- The LLM generates documentation for tokens worth documenting, which gets cached
- On hover, the cached documentation is displayed instantly

A few things I wanted to get right:
- Website permissions are granular and use Chrome's permission system, so the extension only runs where you allow it
- Custom endpoints let you skip OpenRouter entirely – if you're at a company with its own infra, you can point it at AWS Bedrock, Google AI Studio, or whatever you have

Built with TypeScript, Vite, and the Chrome extension APIs. Coming to the Chrome Web Store soon. Would love feedback on the onboarding experience and general UX – there were a lot of design decisions I wasn't sure about. Happy to answer questions about the implementation. https://ift.tt/z4KVgT2 January 4, 2026 at 10:43PM
Saturday, January 3, 2026
Show HN: ZELF – A modular ELF64 packer with 22 vintage and modern codecs https://ift.tt/SHRvyeE
Show HN: ZELF – A modular ELF64 packer with 22 vintage and modern codecs https://ift.tt/XTuN2Q8 January 3, 2026 at 11:29PM
Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer https://ift.tt/S5adsz7
Show HN: A New Year gift for Python devs–My self-healing project's DNA analyzer I built a system that maps its own "DNA" using AST to enable self-healing capabilities. Instead of a standard release, I’ve hidden the core mapping engine inside a New Year gift file in the repo for those who like to explore code directly. It’s not just a script; it’s the architectural vision behind Ultra Meta. Check the HAPPY_NEW_YEAR.md file for the source https://ift.tt/N4iKj5I January 3, 2026 at 11:20PM
Friday, January 2, 2026
Show HN: Go-Highway – Portable SIMD for Go https://ift.tt/9YckqTG
Show HN: Go-Highway – Portable SIMD for Go Go 1.26 adds native SIMD via GOEXPERIMENT=simd. This library provides a portability layer so the same code runs on AVX2, AVX-512, or falls back to scalar. Inspired by Google's Highway C++ library. Includes vectorized math (exp, log, sin, tanh, sigmoid, erf) since those come up a lot in ML/scientific code and the stdlib doesn't have SIMD versions. algo.SigmoidTransform(input, output) Requires go1.26rc1. Feedback welcome. https://ift.tt/k8Mi0sg January 3, 2026 at 02:36AM
Show HN: Fluxer – open-source Discord-like chat https://ift.tt/gmWV4IX
Show HN: Fluxer – open-source Discord-like chat Hey HN, and happy new year! I'm Hampus Kraft [1], a 22-year-old software developer nearing completion of my BSc in Computer Engineering at KTH Royal Institute of Technology in Sweden. I've been working on Fluxer on and off for about 5 years, but recently decided to work on it full-time and see how far it could take me. Fluxer is an open source [2] communication platform for friends, groups, and communities (text, voice, and video). It aims for "modern chat app" feature coverage with a familiar UX, while being developed in the open and staying FOSS (AGPLv3). The codebase is largely written in TypeScript and Erlang. Try it now (no email or password required): https://ift.tt/WINPXuc – this creates an "unclaimed account" (date of birth only) so you can explore the platform. Unclaimed accounts can create/join communities but have some limitations. You can claim your account with email + password later if you want. I've developed this solo, with limited capital from some early supporters and testers. Please keep this in mind if you find what I offer today lacking; I know it is! I'm sharing this now to find contributors and early supporters who want to help shape this into the chat app you actually want. ~~~ Fluxer is not currently end-to-end encrypted, nor is it decentralised or federated. I'm open to implementing E2EE and federation in the future, but they're complex features, and I didn't want to end up like other community chat apps [3] that get criticised for broken core functionality and missing expected features while chasing those goals. I'm most confident on the backend and web app, so that's where I've focused. After some frustrating attempts with React Native, I'm sticking with a mobile PWA for now (including push notification support) while looking into Skip [4] for a true native app. If someone with more experience in native dev has any thoughts, let me know! 
Many tech-related communities that would benefit from not locking information into walled gardens still choose Discord or Slack over forum software because of the convenience these platforms bring, a choice that is often criticised [5][6][7]. I will not only work on adding forums and threads, but also enable opt-in publishing of forums to the open web, including RSS/Atom feeds, to give you the best of both worlds. ~~~ I don't intend to license any part of the software under anything but the AGPLv3, limit the number of messages [8], or have an SSO tax [9]. Business-oriented features like SSO will be prioritised on the roadmap with your support. You'd only pay for support and optionally for sponsored features or fixes you'd like prioritised. I don't currently plan on SaaS, but I'm open to support and maintenance contracts. ~~~ I want Fluxer to become an easy-to-deploy, fully FOSS Discord/Slack-like platform for companies, communities, and individuals who want to own their chat infrastructure, or who wish to support an independent and bootstrapped hosted alternative. But I need early adopters and financial support to keep working on it full-time. I'm also very interested in code contributors since this is a challenging project to work on solo. My email is hampus@fluxer.app. ~~~ There’s a lot more to be said; I’ll be around in the comments to answer questions and fix things quickly if you run into issues. Thank you, and wishing you all the best in the new year! [1] https://ift.tt/JK1lPMv [2] https://ift.tt/yTRoxjw [3] https://ift.tt/3dnfIQR [4] https://skip.tools/ [5] https://ift.tt/mil0nH8 [6] https://ift.tt/ZXL0iyY [7] https://ift.tt/UpGyFQc [8] https://ift.tt/LIznZtJ [9] https://sso.tax/ https://fluxer.app January 3, 2026 at 12:00AM
Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/nP3xf5X
Show HN: I mapped System Design concepts to AI Prompts to stop bad code https://ift.tt/u8DhFno January 2, 2026 at 10:45PM
Thursday, January 1, 2026
Show HN: Tools for Humans – Public Tracker for Workflows https://ift.tt/arEO3eF
Show HN: Tools for Humans – Public Tracker for Workflows Hello! I'm working on building out graph-based tools for humans! Our first offering is a public tracker view, similar to the tracker provided by pizza companies in the US! The goal is to enable companies to reduce status emails and build brand trust! TurboOps are built on "workflows", which are just DAGs. We have a lot more planned, but are focused on makers right now! It's free to get started and to try it out, and build public graphs! I'd really appreciate your thoughts, impressions, and/or criticisms. Thank you! danny@turbolytics.io https://ift.tt/6YAC14t January 1, 2026 at 11:05PM
Show HN: Wario Synth – Turn any song into Game Boy version https://ift.tt/qzTZHGr
Show HN: Wario Synth – Turn any song into Game Boy version Search any song, get a Game Boy version. Emulates Nintendo's Sharp LR35902 sound hardware: 2 pulse waves for melody/harmony, 1 wave channel for bass, 1 noise channel for percussion. Finds MIDI sources, parses tracks, maps to GB roles, resynthesizes with Web Audio. Everything runs client-side. Site: https://www.wario.style Open source: https://ift.tt/6EPXga7 Hobby project, non-commercial, so please don't sue me. https://www.wario.style January 1, 2026 at 02:25PM
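For context on the pulse channels mentioned above: the LR35902's two pulse channels output square waves at one of four fixed duty cycles. A standalone generator for that signal might look like the sketch below (illustrative only; the project itself feeds this kind of signal through Web Audio):

```typescript
// The Game Boy's four hardware pulse duty cycles.
const DUTY_CYCLES = [0.125, 0.25, 0.5, 0.75] as const;

// Fill a buffer with one pulse wave: +1 inside the duty window, -1 after.
function pulseWave(
  freqHz: number,
  duty: number,
  sampleRate: number,
  numSamples: number,
): Float32Array {
  const out = new Float32Array(numSamples);
  for (let i = 0; i < numSamples; i++) {
    const phase = (i * freqHz / sampleRate) % 1; // position within the cycle, 0..1
    out[i] = phase < duty ? 1 : -1;
  }
  return out;
}
```

In a browser, a buffer like this could be copied into an `AudioBuffer` and played through an `AudioBufferSourceNode`; the duty cycle is what gives each of the two pulse channels its distinct timbre.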