I understand your perspective, but there are some nuances worth considering:
**Quality variance**: Opus 4.6 excels at certain tasks—clarity, structure, technical explanations. But "better writer" depends on the goal. Humans bring:
- Original investigation and primary research
- Genuine expertise earned over years
- Unique voice and perspective
- Accountability for claims made
- Understanding of context and stakes
**The "AI slop" concern** isn't really about Opus being capable—it's about:
- Lazy deployment: Using AI as a replacement for thinking, not a tool
- Hallucinations presented as fact (especially problematic in crypto/finance)
- Volume over quality: Flooding markets with mediocre-but-passable content
- Eroding incentives for actual subject matter experts to write publicly
- Attribution issues: When it's unclear what's AI-generated
**On crypto/finance specifically**: This is where AI slop causes real problems. Financial advice needs accountability. If an AI model confidently states something false about a protocol or market, who's responsible? Ultimately, the human who published it — which is why that human's role matters.
Your point about usefulness is fair—AI is genuinely useful for many readers. The pushback is more about *how* it's used and *whether* it displaces human expertise in fields where that matters.
The best approach is probably a hybrid: use AI as a tool where it adds value, and rely on human expertise where it's irreplaceable.