Breaking News
Saturday, February 14, 2026
Show HN: Azazel – Lightweight eBPF-based malware analysis sandbox using Docker https://ift.tt/ph4fdxQ
Show HN: Azazel – Lightweight eBPF-based malware analysis sandbox using Docker

Hey HN, I got frustrated with heavy proprietary sandboxes for malware analysis, so I built my own. Azazel is a single static Go binary that attaches 19 eBPF hook points to an isolated Docker container and captures everything a sample does — syscalls, file I/O, network connections, DNS, process trees — as NDJSON. It uses cgroup-based filtering so it only traces the target container, and CO-RE (BTF) so it works across kernel versions without recompilation.

It also has built-in heuristics that flag common malware behaviors: exec from /tmp, sensitive file access, ptrace, W+X mmap, kernel module loading, etc.

Stack: Go + cilium/ebpf + Docker Compose. Requires Linux 5.8+ with BTF. This is the first release — it's CLI-only for now. A proper dashboard is planned. Contributions welcome, especially around new detection heuristics and additional syscall hooks.

https://ift.tt/Yr56OCz February 14, 2026 at 11:07PM
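To make the heuristic layer above concrete, here is a minimal sketch (not Azazel's actual code) of how a consumer of an NDJSON event stream could flag two of the behaviors mentioned in the post: exec from /tmp and W+X mmap. The field names (event, comm, path, prot) are assumptions for illustration; the real schema may differ.

```go
// Hypothetical consumer of an Azazel-style NDJSON event stream read from stdin.
// Field names below are illustrative assumptions, not Azazel's documented schema.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type event struct {
	Event string `json:"event"` // e.g. "execve", "mmap", "openat"
	Comm  string `json:"comm"`  // process name
	Path  string `json:"path"`  // file path, if any
	Prot  string `json:"prot"`  // mmap protection flags, e.g. "rwx"
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip malformed lines
		}
		switch {
		case e.Event == "execve" && strings.HasPrefix(e.Path, "/tmp/"):
			fmt.Printf("ALERT exec-from-tmp: %s ran %s\n", e.Comm, e.Path)
		case e.Event == "mmap" && strings.Contains(e.Prot, "w") && strings.Contains(e.Prot, "x"):
			fmt.Printf("ALERT wx-mapping: %s requested writable+executable memory\n", e.Comm)
		}
	}
}
```

In the tool itself, checks like these run against events produced by the eBPF hooks and filtered to the target container's cgroup, so only the sample under analysis is traced.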
Friday, February 13, 2026
Show HN: I speak 5 languages. Common apps taught me none. So I built lairner https://ift.tt/xOdbQRq
Show HN: I speak 5 languages. Common apps taught me none. So I built lairner

I'm Tim. I speak German, English, French, Turkish, and Chinese. I learned Turkish with lairner itself -- after I built it. That's the best proof I can give you that this thing actually works. The other four I learned the hard way: talking to people, making mistakes, reading things I actually cared about, and being surrounded by the language until my brain gave in.

Every language app I tried got the same thing wrong: they teach you to pass exercises, not to speak. You finish a lesson, you get your dopamine hit, you maintain your streak, and six months later you still can't order food in the language you've been "learning."

So I built something different. lairner has 700+ courses across 70+ languages, including ones that Duolingo will never touch because there's no profit in it. Endangered languages. Minority languages. A Turkish speaker can learn Basque. A Chinese speaker can learn Welsh. Most platforms only let you learn from English. lairner lets you learn from whatever you already speak. We work together with institutes for endangered languages so we can teach them on our platform.

It's a side project. I work a full-time dev job and build this in evenings and weekends. Tens of thousands of users so far, no ad spend, no funding. I'm not going to pretend this replaces living in a country or having a conversation partner. But I wanted something that at least tries to teach you the language instead of teaching you to play a language-themed game. Happy to answer anything.

https://lairner.com February 13, 2026 at 07:11PM
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills https://ift.tt/v8Ukay3
Show HN: Moltis – AI assistant with memory, tools, and self-extending skills

Hey HN. I'm Fabien, principal engineer, 25 years shipping production systems (Ruby, Swift, now Rust). I built Moltis because I wanted an AI assistant I could run myself, trust end to end, and make extensible in the Rust way using traits and the type system. It shares some ideas with OpenClaw (same memory approach, Pi-inspired self-extension) but is Rust-native from the ground up. The agent can create its own skills at runtime.

Moltis is one Rust binary, 150k lines, ~60MB, web UI included. No Node, no Python, no runtime deps. Multi-provider LLM routing (OpenAI, local GGUF/MLX, Hugging Face), sandboxed execution (Docker/Podman/Apple Containers), hybrid vector + full-text memory, MCP tool servers with auto-restart, and multi-channel (web, Telegram, API) with shared context. MIT licensed. No telemetry phoning home, but full observability built in (OpenTelemetry, Prometheus). I've included 1-click deploys on DigitalOcean and Fly.io, but since a Docker image is provided you can easily run it on your own servers as well.

I've written before about owning your content ( https://ift.tt/U1O8uR2 ) and owning your email ( https://ift.tt/k6zLqY0 ). Same logic here: if something touches your files, credentials, and daily workflow, you should be able to inspect it, audit it, and fork it if the project changes direction.

It's alpha. I use it daily and I'm shipping because it's useful, not because it's done. Longer architecture deep-dive: https://ift.tt/WbAof4D... Happy to discuss the Rust architecture, security model, or local LLM setup. Would love feedback.

https://www.moltis.org February 12, 2026 at 11:15PM
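Moltis itself is Rust and leans on traits for extensibility; as a rough, language-agnostic illustration of what interface-driven multi-provider routing can look like, here is a small Go sketch. Every name in it (Provider, Router, Complete, echoProvider) is hypothetical and not taken from the Moltis codebase.

```go
// Rough analog of trait/interface-based LLM provider routing: try providers in
// order until one succeeds. All names here are hypothetical; Moltis is Rust and
// this Go sketch only illustrates the general idea.
package main

import (
	"context"
	"fmt"
)

type Provider interface {
	Name() string
	Complete(ctx context.Context, prompt string) (string, error)
}

// Router falls back through its providers until one returns a completion.
type Router struct {
	providers []Provider
}

func (r *Router) Complete(ctx context.Context, prompt string) (string, error) {
	var lastErr error
	for _, p := range r.providers {
		out, err := p.Complete(ctx, prompt)
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("%s: %w", p.Name(), err)
	}
	return "", fmt.Errorf("all providers failed: %w", lastErr)
}

// echoProvider is a stand-in for a real backend (hosted API, local GGUF, etc.).
type echoProvider struct{}

func (echoProvider) Name() string { return "echo" }
func (echoProvider) Complete(_ context.Context, prompt string) (string, error) {
	return "echo: " + prompt, nil
}

func main() {
	r := &Router{providers: []Provider{echoProvider{}}}
	out, _ := r.Complete(context.Background(), "hello")
	fmt.Println(out)
}
```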
Thursday, February 12, 2026
Show HN: What is HN thinking? Real-time sentiment and concept analysis https://ift.tt/pAJ5er1
Show HN: What is HN thinking? Real-time sentiment and concept analysis

Hi HN, I made Ethos, an open-source tool to visualize the discourse on Hacker News. It extracts entities, tracks sentiment, and groups discussions by concept. Check it out: https://ift.tt/hDIFTgy

This was a "budget build" experiment. I managed to ship it for under $1 in infra costs. Originally I was using `qwen3-8b` for the LLM and `qwen3-embedding-8b` for the embeddings, but I ran into capacity issues with that model and switched to `llama-3.1-8b-instruct` to stay within a similar budget while getting higher throughput.

What LLM or embedding model would you have used in the same price range? It would need to be a model that supports structured output. How bad is it that `llama-3.1` is now paired with a higher-dimension embedding model? I originally wanted to keep the LLM and the embedding model within the same family, but I'm not sure there's much point in that.

Repo: https://ift.tt/D7b0N6O

I'm looking for feedback on which metrics (sentiment vs. concepts) you find most interesting! PRs welcome!

https://ift.tt/EbVzdAC February 12, 2026 at 11:27PM
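On the structured-output requirement: below is a hedged sketch of asking an OpenAI-compatible chat endpoint for JSON-shaped entity and sentiment output and decoding it. The endpoint URL, the environment variable, and the entities/sentiment schema are assumptions for illustration, not Ethos's actual code, and whether the response_format field is honored depends on the provider.

```go
// Hypothetical structured-output call against an OpenAI-compatible endpoint.
// URL, env var, and the extraction schema are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model          string            `json:"model"`
	Messages       []message         `json:"messages"`
	ResponseFormat map[string]string `json:"response_format"`
}

type chatResponse struct {
	Choices []struct {
		Message message `json:"message"`
	} `json:"choices"`
}

// extraction is the JSON shape we prompt the model to emit.
type extraction struct {
	Entities  []string `json:"entities"`
	Sentiment string   `json:"sentiment"` // "positive" | "neutral" | "negative"
}

func main() {
	req := chatRequest{
		Model: "llama-3.1-8b-instruct",
		Messages: []message{
			{Role: "system", Content: `Return JSON: {"entities": [...], "sentiment": "positive|neutral|negative"}`},
			{Role: "user", Content: "Rust-powered React framework looks great, but the docs are thin."},
		},
		ResponseFormat: map[string]string{"type": "json_object"},
	}
	body, _ := json.Marshal(req)

	httpReq, _ := http.NewRequest("POST", "https://api.example.com/v1/chat/completions", bytes.NewReader(body))
	httpReq.Header.Set("Content-Type", "application/json")
	httpReq.Header.Set("Authorization", "Bearer "+os.Getenv("LLM_API_KEY"))

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var cr chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&cr); err != nil {
		panic(err)
	}
	if len(cr.Choices) == 0 {
		panic("empty response")
	}
	var ex extraction
	if err := json.Unmarshal([]byte(cr.Choices[0].Message.Content), &ex); err != nil {
		panic(err)
	}
	fmt.Printf("entities=%v sentiment=%s\n", ex.Entities, ex.Sentiment)
}
```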
Show HN: rari, the Rust-powered React framework https://ift.tt/txjAUQN
Show HN: rari, the Rust-powered React framework https://rari.build/ February 12, 2026 at 11:15PM
Wednesday, February 11, 2026
Show HN: Agent framework that generates its own topology and evolves at runtime https://ift.tt/31itnaG
Show HN: Agent framework that generates its own topology and evolves at runtime

Hi HN, I'm Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep. They want services, not tools. Existing agent frameworks (LangChain, AutoGPT) failed in production - brittle, looping, and unable to handle messy data. General Computer Use (GCU) frameworks were even worse. My reflections:

1. The "Toy App" Ceiling & GCU Trap. Most frameworks assume synchronous sessions. If the tab closes, state is lost. You can't fit 2 weeks of asynchronous business state into an ephemeral chat session. The GCU hype (agents "looking" at screens) is skeuomorphic. It's slow (screenshots), expensive (tokens), and fragile (UI changes = crash). It mimics human constraints rather than leveraging machine speed. Real automation should be headless.

2. Inversion of Control: OODA > DAGs. Traditional DAGs are deterministic; if a step fails, the program crashes. In the AI era, the Goal is the law, not the Code. We use an OODA loop to manage stochastic behavior:
- Observe: Exceptions are observations (FileNotFound = new state), not crashes.
- Orient: Adjust strategy based on Memory and Traits.
- Decide: Generate new code at runtime.
- Act: Execute.
The topology shouldn't be hardcoded; it should emerge from the task's entropy.

3. Reliability: The "Synthetic" SLA. You can't guarantee one inference (k=1) is correct, but you can guarantee a system of inference (k=n) converges on correctness. Reliability is now a function of compute budget. By wrapping an 80% accurate model in a "Best-of-3" verification loop, we mathematically force the error rate down, trading latency/tokens for certainty (a rough simulation of this follows below).

4. Biology & Psychology in Code. "Hard Logic" can't solve "Soft Problems." We map cognition to architectural primitives. Homeostasis: solving "perseveration" (infinite loops) via a "stress" metric - if an action fails 3x, "neuroplasticity" drops, forcing a strategy shift. Traits: personality as a constraint - "High Conscientiousness" increases verification; "High Risk" executes DROP TABLE without asking.

We need engineers interested in the intersection of biology, psychology, and distributed systems to help us move beyond brittle scripts. It'd be great to have you roast my code and share feedback.

Repo: https://ift.tt/u3UEjTJ

https://ift.tt/GoPg0EV February 11, 2026 at 11:39PM
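On the "Synthetic SLA" point above: the claim is that sampling a stochastic step several times and voting converges on correctness. A minimal, self-contained simulation of that idea is below. It is not Aden's code, and it assumes independent errors; under that assumption a step that is right 80% of the time is right about 89.6% of the time with a 2-of-3 majority, since P(error) = 3(0.2)^2(0.8) + (0.2)^3 = 0.104.

```go
// Minimal illustration of a "Best-of-n" verification loop: run a stochastic
// step n times and take a majority vote. Hypothetical sketch, not Aden's code.
package main

import (
	"fmt"
	"math/rand"
)

// step simulates an inference that returns the right answer 80% of the time.
func step() string {
	if rand.Float64() < 0.8 {
		return "correct"
	}
	return "wrong"
}

// bestOfN runs the step n times and returns the most common answer.
func bestOfN(n int) string {
	counts := map[string]int{}
	for i := 0; i < n; i++ {
		counts[step()]++
	}
	best, bestCount := "", -1
	for ans, c := range counts {
		if c > bestCount {
			best, bestCount = ans, c
		}
	}
	return best
}

func main() {
	const trials = 100000
	ok := 0
	for i := 0; i < trials; i++ {
		if bestOfN(3) == "correct" {
			ok++
		}
	}
	// Expected output is roughly 0.896, versus 0.800 for a single call.
	fmt.Printf("best-of-3 accuracy ≈ %.3f (single call ≈ 0.800)\n", float64(ok)/trials)
}
```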
Show HN: Unpack – a lightweight way to steer Codex/Claude with phased docs https://ift.tt/5iyBeIG
Show HN: Unpack – a lightweight way to steer Codex/Claude with phased docs

I've been using LLMs for long discovery and research chats (papers, repos, best practices), then distilling that into phased markdown (build plan + tests), then handing those phases to Codex/Claude to implement and test phase by phase. The annoying part was always the distillation and keeping docs and architecture current, so I built Unpack: a lightweight GitHub template plus docs structure and a few commands that turns conversations into phases/specs and keeps project docs up to date as the agent builds. It can also generate Mintlify-friendly end-user docs.

There are other spec-driven workflows and tools out there. I wanted something conversation-first and repo-native: plain markdown phases, minimal ceremony, easy to adapt per stack.

Example generated with Unpack (tiny pokedex plus random monsters):
Demo: https://apresmoi.github.io/pokesvg-codex/
Phases index: https://ift.tt/gq3MRws...

I'd love feedback on what the "minimum good" phase/spec format should be, and what would make this actually usable in your workflow.

Repo: https://ift.tt/BA3g5ue

https://ift.tt/BA3g5ue February 11, 2026 at 11:47PM