How some engineers 10x'd their output with AI (and you can too)
Here is this week's digest:
Ask HN: What tech purchase did you regret even though reviews were great?
Many highly rated tech products disappoint in daily use. Smart home gadgets such as robot vacuums and smart lights frequently suffer from reliability issues, short lifespans, and dependence on unstable company cloud services. A key recommendation for smart home tech is to prioritize local control with open standards like Zigbee and Home Assistant over proprietary ecosystems, and to consider smart switches instead of smart bulbs for longevity.
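The local-control recommendation can be illustrated with a minimal Home Assistant automation sketch (the entity IDs below are hypothetical): a Zigbee wall switch, paired through ZHA or Zigbee2MQTT, toggles a light entirely on the local network, with no vendor cloud in the loop.

```yaml
# Hypothetical Home Assistant automation (configuration.yaml):
# a Zigbee wall switch controls a light locally, no cloud required.
automation:
  - alias: "Hallway switch toggles lights"
    trigger:
      - platform: state
        entity_id: binary_sensor.hallway_switch  # Zigbee device via ZHA or Zigbee2MQTT
        to: "on"
    action:
      - service: light.toggle
        target:
          entity_id: light.hallway
```

Because both the trigger and the action resolve on the local network, an automation like this keeps working even if a manufacturer's servers shut down, which is the failure mode many commenters regretted.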
Wearables like the Apple Watch face battery-life challenges, though models like the Ultra, or alternatives like Garmin, offer better endurance for active users. Keyboards, especially from brands like Matias, are cited for poor build quality, in contrast with more durable options like Apple's Magic Keyboard. Even products like 3D printers, initially seen as underutilized, reveal significant practical value once users embrace 3D modeling and print custom parts on demand for home solutions.
Ask HN: What did you read in 2025?
Readers shared diverse literary journeys for 2025, with popular picks ranging from immersive sci-fi series like Dungeon Crawler Carl and Hyperion Cantos to profound classics such as The Gulag Archipelago and Crime and Punishment. Many also highlighted the surprising depth of children's literature like Frog & Toad.
Valuable reading strategies emerged, including leveraging library apps like Libby to encourage completion and reduce guilt over abandoning unengaging books. Readers found success by integrating audiobooks into commutes and reclaiming small, idle moments—traditionally spent on social media—for reading. A key takeaway emphasized reading what genuinely interests you over perceived "must-read" lists and actively challenging your own worldview through diverse selections.
Ask HN: By what percentage has AI changed your output as a software engineer?
Software engineers report highly varied impacts of AI on their productivity, from significant gains (2x to 10x or more, and "infinite" for personal projects that would never have been started otherwise) to neutral or even negative effects. AI excels at boilerplate, mechanical refactors, and learning new tech stacks. Conversely, it often struggles with complex business logic, unique architectural decisions, and integration into large legacy codebases, frequently producing "slop" that requires extensive refactoring or debugging. Key advice includes treating AI as an enhanced learning tool, providing detailed prompts, and rigorously verifying its output to avoid introducing technical debt. Some engineers use it to restructure workflows around documentation, project tracking, and even team updates.
Ask HN: Anti-AI Open Source License?
Seeking to open-source code while prohibiting AI training conflicts with the established definition of "open source": the Open Source Definition forbids discriminating against fields of endeavor. A license with such a restriction would typically be classified as "source available," not truly open source, which can limit adoption.
The enforceability of anti-AI clauses hinges on ongoing legal disputes over whether AI training qualifies as "fair use"; if courts rule that it does, license restrictions may be irrelevant for training purposes. Note that copyleft licenses like the GPL or AGPL could pose more direct challenges for AI companies if their models' outputs are deemed derivative works and redistributed. Also be aware of platform terms, like GitHub's, which may grant broad usage rights for code analysis, potentially including AI training.