When we take on new roles - which we do all our lives, but especially as we figure out how to become adults - we learn by doing and often by doing badly: being too formal or informal with new colleagues, too strait-laced or casual in new situations.
Average wait times at RiverBend jumped to seven hours in 2024 after the University District emergency department closed and have stayed near that level through 2025, according to Oregon Health Authority data.
A summary of all the bad things about AI. These are legitimate concerns as new technology comes on the scene, but that's the price paid to explore and develop it. I wouldn't want worries like energy consumption to stop researchers from seeing how far this technology can take us. Even Citrini regrets the post that recently dragged AI stocks down. One writer suggests a treatment for AI derangement syndrome: just start using it. Play around with it and get some experience with it. Your fears will melt away.
And if you're a SaaS company, you have to think differently. Don't just build software. Build customer-changing outcomes.
Vibe-coding is not that easy. Ask this hapless software engineer who dissed his wife to code something ambitious. Ouch.
If you still think that AI is going to eliminate software engineering jobs, look no further than Anthropic, which is hiring software engineers. You'd think they, of all people, could just vibe-code it themselves, right?
Lots of AI news lately. Here are just a few.
Anthropic unveiled a tool that can translate COBOL code to modern languages. IBM's stock is down 25% and counting. Who knew that much of IBM's core business was COBOL maintenance? Seems like they brought it on themselves, no?
So coders are wondering whether it still makes sense to build HTML input forms if agents are going to supply the input anyway. That would be sad. The Internet is changing, and not in a way that's good for humans.
Here's the head of AI safety and alignment at Meta letting OpenClaw delete her inbox. Do you still have confidence in Meta? I suspect it's like this at other companies too. They don't know what they're doing.
You spend a few minutes defining a task. Clear intent, clear scope, clear success criteria. You hand it to the AI and it starts working. But you’re not going to sit there watching a cursor move. So you jump to the next task. Set it up, define intent, delegate. Then the next one.
Three or four parallel streams running at once. You feel productive. You feel like you’ve cracked the code.
Then the first task finishes. You switch back to it. But now you need to rebuild the context. What was I trying to do here? What approach did it take? Does this actually solve the problem or just pass the tests? You evaluate, adjust, redirect. Then the second task finishes. Context switch. Evaluate. Redirect. The third one hits an error. Context switch again.
These problems are like distant locations you would hike to. In the past, you had to make the journey yourself, and along the way you could lay down trail markers and make maps for others to follow.
AI tools are like a helicopter that drops you off at the site. You skip straight to the destination and miss all the benefits of the journey, which was only part of the value of solving these problems in the first place.