Adsrekt

codex is working badly today
gpt 5.4 overloaded the servers
am i the only one with this problem?
someone got a speech model running on an apple watch.
not a toy demo. granite 4.0 1B speech just ranked FIRST on the OpenASR leaderboard.
here's what's wild about it:
• 1B parameters - half the size of granite 3.3 2B
• higher english transcription accuracy than the bigger model
• speculative decoding for faster inference on tiny hardware
• 6 languages - english, french, german, spanish, portuguese, japanese
• keyword list biasing so it actually gets names and acronyms right
the part nobody is talking about:
you're paying for whisper API calls every month while a model half the size of its predecessor just took first place.
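the speculative decoding bit is the interesting engineering. roughly: a cheap draft model guesses several tokens ahead and the expensive target model only verifies, keeping the longest prefix it agrees with. a toy sketch of the idea, with hard-coded lookup tables standing in for real models (this is illustrative, not Granite's actual implementation):

```python
# Illustrative sketch of speculative decoding. Both "models" are hard-coded
# lookup tables standing in for real networks.

def target_next(prefix):
    # Stand-in for the big model's next-token prediction.
    table = {
        (): "the",
        ("the",): "quick",
        ("the", "quick"): "brown",
        ("the", "quick", "brown"): "fox",
    }
    return table.get(tuple(prefix), "<eos>")

def draft_next(prefix):
    # Stand-in for the small draft model; agrees with the target until step 3.
    table = {(): "the", ("the",): "quick", ("the", "quick"): "red"}
    return table.get(tuple(prefix), "<eos>")

def speculative_decode(max_len=16, k=3):
    out = []
    while len(out) < max_len:
        # 1. draft k tokens cheaply
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # 2. target verifies: accept draft tokens while they match its own choice
        accepted = []
        for tok in draft:
            if target_next(out + accepted) == tok:
                accepted.append(tok)
            else:
                break
        # 3. target always contributes one token (the correction, or the next one)
        accepted.append(target_next(out + accepted))
        out += accepted
        if "<eos>" in out:
            return out[:out.index("<eos>")]
    return out

print(speculative_decode())  # ['the', 'quick', 'brown', 'fox']
```

the payoff: when the draft model agrees with the target, you get several tokens for one expensive verification pass, which is why this helps so much on tiny hardware.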
openai scanned 1.2 MILLION commits in 30 days
they found 10,561 high-severity bugs. 792 criticals.
in projects you probably depend on right now
openssh. gnutls. chromium. libssh. PHP.
these aren't hobby repos. these are the foundations your entire stack sits on. and humans missed this stuff for years
codex security doesn't just flag noise like your linter having a panic attack. it builds context across the whole project, validates the finding, then proposes the actual fix
that's the difference between "here are 4000 warnings you will ignore" and "this specific function in openssh lets an attacker..."
my Codex Limits have been updated for the 5th time this week
any news?
you shipped 5 polished demos. you have ZERO users. the product wasn't the problem.
the marketing was.
i know because i built fcksmm after staring at a blank compose box for months while my projects collected dust.
> "if i build something good enough people will find it"
that's the most expensive lie in the build in public community. good products die quiet deaths every single day.
marketing isn't performative. it's distribution engineering. it's part of the stack just like your database and your auth layer.
you wouldn't ship without error handling. so why are you shipping without a single post explaining what you built?
openai dropped GPT-5.4 Pro with ZERO safety evals
no system card. no public risk assessment. nothing.
this is likely the best model in the world right now for biological research R&D, orchestrating cyberoffense operations, and autonomous computer use - and they shipped it like a patch note
this isn't the first time either. GPT-5.2 Pro followed the same pattern. quiet release, no documentation, no external review. they've done this before and nobody held them accountable, so they did it again
openai is now shipping frontier models with less transparency than a weekend fine-tune on huggingface
GPT 5.4 TIPS
stop enabling the 1mil context window. it's not ready. it's UX theater and it's burning your builds alive
i tested the new model across 3 of my own projects. thoroughly. not a vibe check - actual production work
here's what nobody is telling you:
abstract task comprehension got SIGNIFICANTLY better. the model finally understands what you're trying to do before you spell out every step
but that 1mil context window everyone is hyped about? DON'T TOUCH IT. it's half-baked. shove your context into the project folder and the project settings instead. that's where the model actually reads
stop building your IDEA. start with your GTM.
i shipped 3 tools this year. the ones that hit $5k MRR had one thing in common - i knew where the traffic was coming from BEFORE i wrote line 1 of code.
the ones that flopped had better architecture.
you're sitting on 5 polished demos right now. ZERO users. zero MRR. and you're still tweaking the agent loop instead of asking one question - where do my first 50 users come from?
not your idea. not your tech stack. not your multi-agent framework.
your startup lives or dies on distribution.
do a resource audit right now. look at what you already have.
new gpt 5.4 feels amazing
>gpt 5.3 codex described its actions at length, but ultimately produced terrible code
>gpt 5.2 didn't describe anything at all, but worked better
>gpt 5.4 combined these advantages
gpt 5.4 or Opus 4.6 now?
gpt-5.4 is coming and 80% of the COMPUTE will go toward making sure it doesn't hurt your feelings while the other 20% tries to remember how to reason
your ai agent has full root access right now
one hallucinated command is all it takes
> sudo rm -rf /
that's not theoretical
agentic loops generate shell commands from context windows you don't control. the model doesn't need to be MALICIOUS.
it just needs to be wrong once.
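a minimal first line of defense, sketched in python: parse the proposed command and refuse anything outside an allowlist before it ever reaches a shell. the names and lists here are illustrative, not from any real framework, and this is naive on purpose - a real deployment still needs an actual sandbox (container, non-root user, no network):

```python
import shlex

# Hypothetical guard for agent-proposed shell commands: parse, then check
# against an allowlist of binaries and a denylist of dangerous tokens.
# Naive by design - e.g. "a>b" with no spaces slips past the token check,
# which is why this is a first line of defense, not a sandbox.

ALLOWED_BINARIES = {"ls", "cat", "grep", "git"}
FORBIDDEN_TOKENS = {"rm", "sudo", "mkfs", "dd", ">", ">>", "|", ";", "&&"}

def is_safe(command: str) -> bool:
    try:
        tokens = shlex.split(command)
    except ValueError:          # unbalanced quotes etc. -> reject
        return False
    if not tokens:
        return False
    if tokens[0] not in ALLOWED_BINARIES:
        return False
    return not any(tok in FORBIDDEN_TOKENS for tok in tokens)

print(is_safe("ls -la /tmp"))      # True
print(is_safe("sudo rm -rf /"))    # False: "sudo" is not allowlisted
print(is_safe("cat a.txt rm b"))   # False: "rm" appears as a token
```

the point is structural: the agent never gets a raw shell, it gets a checkpoint that fails closed when it hallucinates.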
your phone knows more about you than your wife does.
a burner paid in crypto has no history, no identity, no leash.
no verification selfie. no linked bank. no 3am anxiety about which exchange just got subpoenaed.
the myth that python "handles memory for you" is why your agents OOM at 4 hours of uptime
ran 24 multi-agents in parallel last month, burning 10x the tokens of a single session for ZERO usable output
the real problem wasn't the tokens though, it was the memory nobody was watching
python uses reference counting plus a cyclic garbage collector. sounds fine until you load numpy arrays through C extensions that don't decrement refs properly. those objects NEVER get collected. they just sit there, growing, silent
every 100 tokens of context your long-running agent processes, that's another tensor allocation that never gets freed
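you can at least watch the growth from the python side with stdlib tooling. a minimal watchdog sketch - the leaking bytearrays are a stand-in for tensors you forgot to drop, and note that a true C-extension leak that never decrefs won't even show up here, which is exactly why those are so easy to miss:

```python
import gc
import tracemalloc

# Leak watchdog for a long-running loop: snapshot allocations with
# tracemalloc and diff against a baseline each iteration. This only sees
# memory tracked by Python's allocator - C-extension leaks are invisible.

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

leaky = []  # stand-in for state your agent accumulates and never releases
for step in range(3):
    leaky.append(bytearray(1_000_000))
    gc.collect()  # force a cycle-collector pass so growth isn't just pending garbage
    snapshot = tracemalloc.take_snapshot()
    top = snapshot.compare_to(baseline, "lineno")[0]  # biggest growth site
    print(f"step {step}: top allocation site grew by {top.size_diff / 1e6:.1f} MB")

# usually 0 unless objects genuinely defy collection (gc.DEBUG_SAVEALL aside)
print("uncollectable objects:", len(gc.garbage))
```

if the top site keeps growing step over step while your workload is flat, you have your leak - and if resident memory grows while tracemalloc stays flat, the leak is below python, in an extension.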
your AI is a BLACK BOX and that's why it's going to drain your wallet
mechanistic interpretability is how you crack open an LLM and map the actual circuits inside it
not vibes testing
not "it seems to work"
actual neuron-level tracing of how the model implements logic
right now 96% of traffic hitting your endpoints is bots reading raw html
your model is making decisions you can't audit, can't trace, can't explain
and youre letting it hold keys to real capital
corporate AI safety teams don't understand how their own models work
they wrap it in RLHF and call it aligned
that's not safety, that's MARKETING
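for a sense of what "neuron-level tracing" means at its simplest: don't just read the output, record every intermediate activation, then ask which units fired and what each contributed. a toy sketch on a hand-rolled two-layer net - numpy only, nothing like a real LLM, purely to show the mechanic:

```python
import numpy as np

# Toy first step of mechanistic interpretability: instrument the forward
# pass so every intermediate activation is recorded, then attribute the
# output to individual hidden units. Weights are random and fixed.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 2))   # hidden -> output

def forward(x, trace):
    h_pre = x @ W1
    h = np.maximum(h_pre, 0.0)          # ReLU
    trace["hidden_pre"] = h_pre
    trace["hidden"] = h
    out = h @ W2
    trace["output"] = out
    return out

trace = {}
forward(np.ones(4), trace)

# Which hidden units are active, and how much does each push on logit 0?
active = np.flatnonzero(trace["hidden"] > 0)
contrib = trace["hidden"] * W2[:, 0]    # per-unit contribution to output[0]
print("active units:", active)
print("top contributor to logit 0:", int(np.argmax(np.abs(contrib))))
```

in a real model this scales up to hooks on every layer and causal interventions on specific units, but the principle is the same: you can only audit what you record.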
Anthropic revenue run rate just hit $20 BILLION
>$9 billion projected end of 2025
>$14 billion a few weeks ago
>$19 billion now
they added $5 billion in WEEKS
that's not growth, that's enterprise adoption at a speed that doesn't make sense unless massive API contracts are landing simultaneously.
i ran 24 multi-agent sessions in codex at the same time
wtf openai?
10x the tokens of a single tab
output was ZERO improvement over one focused prompt
> "emergent behavior from agent-to-agent collaboration"
yeah what emerged was my bill
you're paying for agents to talk to each other, not to produce anything
agent A summarizes for agent B who reformats for agent C who passes it to agent D who outputs the same json you could've gotten from one clean system prompt in 14 seconds
that's not architecture, that's a THEATRE of computation
i turned on 24 multi-agents in codex at the same time
it is AWFUL
24 agents running in parallel burning through 10x the tokens of a single session and producing absolutely nothing you couldn't get from one tab and a clear prompt
this is not an agent framework. this is a token furnace with a loading spinner
OpenAI shipped a feature that looks like the future if you squint but the second you actually try to build something with it you realize every agent is just restating the same context to itself over and over, eating your budget alive
"iT's JuSt EaRlY" - cool, so i'm paying 10x for a beta that...
OpenClaw USELESS?
i didn't want to write anything about openclaw without actually using it first
so i used it. built with it. tested workflows. gave it honest time
here's the honest review nobody asked for
rn the whole scene around ai agents feels like a group chat competition between friends. whose agent sounds cooler. whose demo looks slicker. whose screenshot gets more likes
but when you sit down and try to actually SIMPLIFY something with openclaw - replace a step in your workflow or cut a manual process - it doesn't do that
it adds steps. it creates new dependencies. you end up babysitting it
most ppl talking about ai agents have never built one
here is the actual architecture rn
tool-calling agent = llm brain + function registry + execution loop
you define tools as structured schemas. the model picks which tool to call and passes args. your runtime executes it and feeds the result back
that's the whole loop. no magic
modern frameworks like langchain or openai function calling handle the routing. cloud ml platforms like vertex or bedrock handle inference scaling so you don't burn cash on idle gpus
qwen 3.5 small models - 0.8B to 9B params - can run tool-calling locally on a single node
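the loop described above, as a runnable sketch. the "model" here is a hard-coded stub standing in for an LLM with function calling enabled, and the tool is made up for illustration - but the registry, the execute step, and the feed-result-back step are the real shape:

```python
import json

# Bare-bones tool-calling agent: a registry of tool schemas, a model that
# picks a tool and args, a runtime that executes it and feeds the result
# back until the model produces a final answer.

TOOLS = {
    "get_weather": {
        "description": "Look up current weather for a city",
        "parameters": {"city": "string"},
        "fn": lambda city: {"city": city, "temp_c": 21},  # fake backend
    },
}

def fake_model(messages):
    # Stub standing in for the LLM: decides to call get_weather once, then
    # answers in plain text after it sees the tool result in the transcript.
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if tool_msgs:
        data = json.loads(tool_msgs[-1]["content"])
        return {"role": "assistant",
                "content": f"It is {data['temp_c']}°C in {data['city']}."}
    return {"role": "assistant",
            "tool_call": {"name": "get_weather", "args": {"city": "Berlin"}}}

def run_agent(user_msg, max_steps=5):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]                         # final answer
        result = TOOLS[call["name"]]["fn"](**call["args"])  # execute the tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("agent did not converge")

print(run_agent("weather in berlin?"))  # It is 21°C in Berlin.
```

swap fake_model for a real chat-completions call with the same message shape and you have the whole architecture. everything else is routing.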