The real reason you're failing tech interviews (it's not your code)
Here is this week's digest:
Ask HN: Is the rise of AI tools going to be the next 'dot com' bust?
The current AI boom shares striking similarities with the dot-com era: both are seen as transformative technologies attracting massive investment before clear 'killer apps' emerge. A key parallel is the 'picks and shovels' investment in hardware providers, with NVIDIA today playing the role Cisco did in the 90s. However, key differences suggest any correction may not be a catastrophic 'bust'.
Unlike the many unprofitable startups of the 90s, today's AI race is led by profitable tech giants. Furthermore, AI is viewed as a fundamental productivity tool with immediate cross-industry applications, not just a new consumer channel. A likely outcome is not a crash, but a market 'shake-up' or 'deflation' that weeds out unsustainable business models, especially thin API wrappers. A potential trigger for this correction could be the high cost of AI; as companies raise prices to cover hardware and energy expenses, customers may pull back if the return on investment isn't clear.
Ask HN: How do you tune your personality to get better at interviews?
A developer struggling with job rejections despite strong technical skills sparked a discussion on the often-overlooked aspects of interviewing. The most impactful advice centered on the 'digital footprint check.' A commenter discovered the original poster had a history of online posts complaining about employers and expressing burnout, suggesting this was the likely reason for rejection after multiple interviews. This led to a broader conversation about the importance of curating your online presence, with actionable tips like using AI tools (e.g., Perplexity) to audit what recruiters can find about you.
Other key takeaways include:
- Communication Style: Being long-winded or rambling is a common reason for rejection. Practice being concise and clear. Mention you're aware of edge cases, but let the interviewer ask for details rather than listing them all.
- Handling Disagreement: When an interviewer incorrectly 'corrects' you, how you respond is a test of your humility and teamwork skills. Frame it collaboratively, e.g., "Let's dig deeper into that in case I've missed something in my approach."
- Mock Interviews: Don't guess what you're doing wrong. Get direct, honest feedback from friends, mentors, or paid services to identify blind spots in your interviewing style.
- Market Reality: In a competitive market, rejection isn't always personal. It could be due to internal budget changes, a slightly better candidate, or sheer bad luck. Resilience is key.
Ask HN: Do you still bookmark websites?
While dedicated bookmarking services have come and gone, the practice of saving links is far from dead—it has simply evolved. Many still rely on their browser's native bookmarking for its simplicity and built-in sync. However, a common pain point is the "digital graveyard" phenomenon, where services like Instapaper or Pocket become a place for links that are saved but never revisited, a behavior some call digital Tsundoku.
Key takeaways from the community include:
- Enhancing Browser Bookmarks: Use extensions to add powerful search capabilities to your native browser bookmarks.
- Modern Tools: Services like Raindrop.io are popular for their robust features and user interface.
- Self-Hosting for Control: For privacy and longevity, many turn to self-hosted solutions like Linkding, Wallabag, and Karakeep.
- Bookmarks as Knowledge: A growing trend is to integrate links into note-taking apps like Obsidian. This workflow treats a bookmark as a citation within a larger personal knowledge base, adding context and ensuring the content is archived and searchable.
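The bookmarks-as-knowledge workflow above boils down to saving each link as a Markdown note with metadata, so your note-taking app can search and link it. Here is a minimal sketch of that idea in Python; the function name, front-matter fields, and vault layout are illustrative assumptions, not part of any particular tool's format:

```python
from datetime import date
from pathlib import Path

def bookmark_to_note(title: str, url: str, tags: list[str],
                     note: str, vault: Path) -> Path:
    """Save a bookmark as a Markdown note with YAML front matter.

    This is a hypothetical sketch of the 'bookmark as citation' workflow;
    apps like Obsidian will index any Markdown file in the vault folder.
    """
    # Derive a filesystem-safe filename from the title.
    slug = "".join(c if c.isalnum() else "-" for c in title.lower()).strip("-")
    body = (
        "---\n"
        f"url: {url}\n"
        f"saved: {date.today().isoformat()}\n"
        f"tags: [{', '.join(tags)}]\n"
        "---\n\n"
        f"# {title}\n\n"
        f"{note}\n"
    )
    path = vault / f"{slug}.md"
    path.write_text(body, encoding="utf-8")
    return path
```

The front matter keeps the URL and tags machine-readable, while the note body holds the context that makes the link worth revisiting later.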
Ask HN: Are you comfortable uploading sensitive data to ChatGPT or Gemini?
A discussion on AI and data privacy reveals a strong consensus against uploading sensitive information to services like ChatGPT or Gemini. The primary alternative recommended is running open-source Large Language Models (LLMs) locally for complete privacy.
Key takeaways and recommendations include:
- Local AI Software: Popular tools for running models on your own machine are LM Studio, Ollama, and llama.cpp, which is noted for its performance.
- Recommended Models: For local use, users suggest models like Phi-4 for writing, Qwen3 for coding, and smaller Mistral models for classification. Models like qwen3:30b are considered powerful enough for many tasks and can run on modern CPUs without a high-end GPU.
- Hybrid Approach: If you must use a cloud AI, a useful trick is to first anonymize your data using a local LLM before uploading the sanitized version.
- Corporate Use: For professional work, the advice is to stick to company-approved, enterprise-grade services (like Microsoft Azure's AI offerings) that come with data confidentiality agreements.
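The hybrid approach can be sketched as a redaction pass that runs before anything leaves your machine. The thread recommends a local LLM for this step; the crude regex version below is only a stand-in for illustration, and it catches just obviously structured identifiers like emails and phone numbers:

```python
import re

# Hypothetical sketch of the "anonymize before upload" idea.
# A local LLM would catch far more (names, addresses, context clues);
# these regexes only handle mechanically recognizable identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace sensitive matches with placeholder tokens
    before the text is sent to a cloud API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Because the placeholders are labeled, the cloud model can still reason about the structure of the text ("reply to [EMAIL]") without ever seeing the real values.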
Ask HN: Have any successful startups been made by 'vibe coding'?
The consensus is that building a successful startup purely through 'vibe coding'—especially for non-programmers—is not yet realistic. The primary challenge is scale; as a codebase grows, natural language prompts become more complex and less efficient than writing the code directly. LLMs struggle with maintaining context, leading to bugs and hallucinations in larger projects. Security is another major concern, with some vibe-coded apps reportedly having significant vulnerabilities.
However, experienced developers are finding ways to use AI as a powerful assistant. Effective strategies include:
- Systematic Prompting: Employing a disciplined process of generating code, tests, and then using further prompts to refactor and clean the output.
- Strict Guardrails: Using tools like TypeScript and strict linting rules to help the AI generate higher-quality code.
- Architectural Patterns: Breaking down problems into smaller, discrete components (like microservices), though this can shift complexity from the code to the architecture.
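The 'strict guardrails' strategy amounts to gating AI-generated code through automated checks before it is accepted. The thread's examples use TypeScript and linters; here is an analogous sketch in Python using the standard-library ast module, with a made-up banned-call rule standing in for a real lint configuration:

```python
import ast

# Example project-specific rule; a real setup would use a full linter config.
BANNED_CALLS = {"eval", "exec"}

def passes_guardrails(source: str) -> tuple[bool, str]:
    """Gate LLM-generated code: it must parse cleanly and
    must not invoke any banned call."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return False, f"syntax error: {err.msg}"
    for node in ast.walk(tree):
        # Flag direct calls to banned names, e.g. eval("...").
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return False, f"banned call: {node.func.id}"
    return True, "ok"
```

Rejected output goes back to the model with the failure message as a follow-up prompt, which is essentially the systematic generate-check-refactor loop described above.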
Finally, a significant portion of the 'vibe coding' success stories appear to be part of a 'get rich quick' trend, where individuals create simple AI-wrapper apps, exaggerate their revenue, and then sell courses on how to do the same.