
Ask HN Digests

November 12, 2025

Why your status pages lie, local AI secrets, and the real cost of trust

Here is this week's digest:

Ask HN: Who uses open LLMs and coding assistants locally? Share setup and laptop

Many developers are running open-source LLMs and coding assistants locally, primarily for privacy, offline access, and cost control, despite a quality gap compared to cutting-edge cloud models. Popular hardware includes M-series MacBooks with 64-128 GB of unified memory and AMD-based desktops with discrete GPUs such as the RTX 3090 or RTX 4000 series. Key models include gpt-oss-120b (which often needs inference-parameter tuning), Qwen3-Coder-30b, and Gemma3:12b, typically managed with Ollama or LM Studio and integrated into VS Code via continue.dev or llama.cpp-based plugins.

Useful tips include:

  • Hardware optimization: Prioritize unified memory (Apple Silicon) or high VRAM for larger models; use CPU-GPU offloading for MoE models.
  • Performance hacks: Load file contents into context before your instructions so the prompt prefix stays stable and the KV cache can be reused across requests. For gpt-oss, set reasoning_effort to "high" and tune top_k, top_p, and temperature (a minimal request sketch follows this list).
  • Workflow enhancements: Use Open WebUI for a browser interface, Aider for agentic coding, and llama.cpp for fast, self-contained binary deployment. For code completion, smaller specialized models such as Qwen3-Coder-30b (3B active parameters) or gpt-oss-20b give fast, low-latency responses for quick queries and boilerplate generation. Some users build custom RAG systems over local documentation or use Coderunner for sandboxed code execution.
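Combining the KV-cache tip and the gpt-oss sampling tip above, here is a minimal sketch of querying a local OpenAI-compatible endpoint such as the ones exposed by llama.cpp's server or Ollama. The port, model name, sampling values, and support for non-standard fields like top_k are assumptions; check your server's documentation for what it actually honors.

```python
# Minimal sketch, assuming a local OpenAI-compatible server (e.g. llama.cpp's
# llama-server or Ollama). Port, model name, and sampling values are
# placeholders, not recommendations from the thread.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local endpoint
    api_key="local",                      # most local servers ignore the key
)

file_text = open("main.py").read()

response = client.chat.completions.create(
    model="gpt-oss-120b",  # whatever name your server registers the model under
    messages=[
        {
            "role": "user",
            # File content first, instruction last: a stable prompt prefix lets
            # the server reuse its KV cache across follow-up requests.
            "content": f"Here is main.py:\n\n{file_text}\n\nRefactor the parse loop into a generator.",
        }
    ],
    temperature=0.7,           # placeholder; use the values suggested for your model
    top_p=0.9,
    extra_body={"top_k": 40},  # non-standard field; only some servers honor it
    # gpt-oss reasoning effort is configured differently per server (often via
    # a system prompt or a server-specific field), so it is omitted here.
)
print(response.choices[0].message.content)
```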

Read more

Ask HN: Would you trust an AI that sees and hears everything you do?

The discussion revolves around trusting an AI that constantly sees and hears everything you do, particularly in a wearable form factor. While strong opposition cites deep privacy concerns, the potential for surveillance, and a fundamental distrust of corporations and governments, some participants explore conditions under which such an AI might be acceptable.

Key takeaways and productive arguments include:

  • Privacy First: Many users strongly oppose the concept due to fears of personal data exploitation, loss of autonomy, and the dystopian implications of constant monitoring. They advocate for actively avoiding such technology and those who use it.
  • Verifiable Trust is Paramount: Before such a device would even be considered, proposed conditions include local-only data storage (no cloud servers), a fully functional offline mode, and open-source software (especially the OS and data pipeline) to ensure transparency and auditability.
  • Granular Control and Opt-in: The system should default to rejecting all data, capturing or processing only what the user explicitly enables (similar to wake-word detection) rather than recording everything and filtering later; see the sketch after this list.
  • Ethical Design: The project creator acknowledges the pervasive lack of trust and aims for an architecture where trust is verifiable through design rather than promises, focusing on user control and responsible development to prevent misuse.
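To make the default-deny idea concrete, here is an illustrative sketch of opt-in capture: audio frames are dropped unless a local trigger explicitly arms recording. The wake_word_detected function is a hypothetical stand-in, not part of the project discussed in the thread.

```python
# Illustrative only: a default-deny capture loop in which nothing is stored or
# forwarded unless an explicit local trigger arms it. wake_word_detected is a
# hypothetical placeholder for an on-device model.
from queue import Queue
from typing import Iterable


def wake_word_detected(frame: bytes) -> bool:
    """Stand-in for an on-device wake-word detector."""
    return False  # default-deny: capture stays off unless explicitly triggered


def capture_loop(mic_frames: Iterable[bytes], session: Queue) -> None:
    armed = False
    for frame in mic_frames:
        if not armed:
            if wake_word_detected(frame):
                armed = True  # the user explicitly opted in for this session
            # Un-armed frames are dropped here: never stored, never uploaded.
            continue
        session.put(frame)  # only explicitly enabled audio reaches processing
```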

Read more

Ask HN: Where to begin with "modern" Emacs?

For Neovim users exploring "modern" Emacs and aiming to own their configuration, the community offers two main paths: building from vanilla Emacs or using a lightweight starter kit like Bedrock or minimal-emacs.d. While distributions like Doom Emacs provide a feature-rich, Vim-like experience, they can obscure the underlying system.

Key packages for a modern setup include the completion stack (Vertico, Consult, Marginalia, Orderless, Corfu), modal editing (Evil-mode or Meow), and powerful tools like Magit for Git and Eglot or lsp-mode for LSP support. Recommended learning resources include System Crafters' "Emacs from Scratch" series, Mickey Petersen's Mastering Emacs, Sacha Chua's Emacs News, and the built-in Emacs tutorial and manuals. Ergonomic advice centers on remapping Caps Lock to Control.

Read more

Ask HN: What are you working on?

Entrepreneurs and developers are actively building a diverse array of projects, often addressing personal pain points or leveraging deep domain expertise. Key themes include the pervasive integration of AI for tasks like structured brainstorming, nursing automation, and fashion search, alongside innovative approaches to user experience and community building. For example, 'Brain Hurricane' uses AI for idea validation, while 'The Kaizen App' offers an unbypassable app blocker for digital well-being. A recurring tip is to solve problems you personally experience and to gather specific user feedback early to guide development.

Read more

Ask HN: Why are most status pages delayed?

Service status pages often lag significantly behind real-world outages. The delay stems from several factors:

  • Human-in-the-Loop Processes: Major sites require multiple human approvals—from on-call engineers investigating alerts to managers, directors, and even PR/legal teams—before an outage is publicly acknowledged. This escalation can take 20-50 minutes.
  • Business & Legal Incentives: Companies are highly incentivized to delay or downplay outages due to Service Level Agreements (SLAs) which can trigger refunds or even lawsuits. Publicly acknowledging downtime can also negatively impact stock prices, investor confidence, and reputation.
  • Avoiding False Alarms: Automated monitoring is fast but can produce false positives due to system complexity or transient network issues, so companies prefer human validation before declaring an outage publicly; false alarms carry their own business costs. (A minimal debounce sketch follows this list.)
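As a rough illustration of that trade-off, automated checks are usually debounced: nothing escalates until several consecutive probes fail, and even then a human decides what the public status page shows. A minimal sketch, with the URL, interval, and threshold as illustrative assumptions:

```python
# Hedged sketch: require several consecutive failed probes before alerting,
# which avoids false alarms but adds delay before anyone even considers
# updating the public status page. URL and thresholds are placeholders.
import time
import urllib.request

FAILURES_BEFORE_ALERT = 3      # require consecutive failures, not a single blip
CHECK_INTERVAL_SECONDS = 60


def probe(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False


def monitor(url: str) -> None:
    consecutive_failures = 0
    while True:
        if probe(url):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_ALERT:
                # In practice this pages an on-call engineer; the public page
                # only changes after the human escalation chain signs off.
                print(f"ALERT: {url} has failed {consecutive_failures} checks in a row")
        time.sleep(CHECK_INTERVAL_SECONDS)
```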

While frustrating for users, these delays are often a calculated business decision rather than a technical oversight. Third-party monitoring services sometimes provide faster alerts than official status pages.

Read more

Don't miss what's next. Subscribe to Ask HN Digests:

Powered by Buttondown, the easiest way to start and grow your newsletter.