I spent ~$2k on Claude Code this year (10+ years dev, non-dev job now, side projects only).
The hard lesson: markdown instructions don’t work.
AI needs enforcement.
The breaking point was Auto Compact.
After context compression, Claude consistently ignores CLAUDE.md – the very file Anthropic tells you to create.
It’s like hiring someone who forgets their job description every 2 hours.
Core issues I couldn’t solve with instructions alone:
– Post-compact amnesia: “interprets” previous session, often destructively
– Session memory loss: asks the same questions like a new intern daily
– TODO epidemic: “I implemented it!” (narrator: it was just a TODO)
– Command chaos: rm -rf, repetitive curl prompts, git commits with “by Claude”
– Guidelines = suggestions: follows them… when it feels like it
After six months of struggling with this, I built enforcement hooks: a command restrictor, an auto-summarizer that runs before compaction, a TODO detector, and a commit validator.
They work, but I feel like I'm working against the tool rather than with it.
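For the curious, here's a minimal sketch of what a command-restrictor hook can look like (an illustration in the spirit of the hooks above, not the actual code from the repo). It assumes Claude Code's PreToolUse hook contract: the hook receives a JSON event with `tool_name` and `tool_input` on stdin, and a "block" exit code (2, per the docs) rejects the tool call. The blocklist patterns are my own assumptions; check the current hooks documentation before relying on details:

```python
import re

# Illustrative blocklist (patterns and flag order are assumptions,
# not rules taken from the linked repo).
BLOCKED_PATTERNS = [
    r"\brm\s+-\w*r\w*f",          # rm -rf (this flag order only)
    r"\bgit\s+push\b.*--force",   # force pushes
    r"\bcurl\b.*\|\s*(ba)?sh\b",  # curl | sh pipelines
]

def check(command: str):
    """Return the first blocked pattern the command matches, or None."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, command):
            return pattern
    return None

def handle_event(event: dict) -> int:
    """Map a PreToolUse event to a hook exit code: 2 blocks, 0 allows."""
    if event.get("tool_name") != "Bash":
        return 0
    command = event.get("tool_input", {}).get("command", "")
    return 2 if check(command) else 0

# A real hook script would read the event JSON from stdin and exit
# with the returned code, e.g.:
#   import json, sys
#   sys.exit(handle_event(json.load(sys.stdin)))
```

The point of the design: the model never sees a choice. A denied command fails at the tool boundary with the reason on stderr, instead of relying on CLAUDE.md surviving the next compaction.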
Questions for the community:
1. Is this everyone’s experience, or am I using Claude Code fundamentally wrong?
2. For those on Cursor/Copilot/etc – same enforcement issues?
3. Is “markdown guidelines → AI follows them” just… not viable at scale?
The hooks are on GitHub (docs are mostly in Korean, but the hooks themselves work regardless of language):
https://github.com/meloncafe/claude-code-hooks
Really curious if this is a universal AI coding problem or a skill issue on my end.
Comments URL: https://news.ycombinator.com/item?id=45871445
Points: 1
# Comments: 0
Source: news.ycombinator.com

