A summary of all the bad things about AI. These are legitimate concerns as new technology comes on the scene, but that's the price of exploring and developing it. I wouldn't want things like energy consumption to stop researchers from seeing how far this technology can take us. Even Citrini regrets the post that brought AI stocks down recently. One guy suggests a treatment for AI derangement syndrome: just start using it. Play around with it and get some experience with it. Your fears will melt away.
And if you're a SaaS company, you have to think differently. Don't just build software. Build customer-changing outcomes.
Vibe-coding is not that easy. Ask this hapless software engineer who dissed his wife to code something ambitious. Ouch.
If you still think that AI is going to eliminate software engineering jobs, look no further than Anthropic, which is hiring software engineers. You'd think they, of all people, could just vibe code it themselves, right?
Lots of AI news lately. Here are just a few.
Anthropic unveiled a tool that can translate COBOL code to modern languages, and IBM stock is down 25% and counting. Who knew that so much of IBM's core business was COBOL maintenance? Seems like they brought it on themselves, no?
So coders are wondering whether it still makes sense to build HTML inputs if agents are going to provide the input anyway. That would be sad. The Internet is changing, and not in a way that's good for humans.
Here's the head of AI safety and alignment at Meta letting OpenClaw delete her inbox. Do you still have confidence in Meta? I suspect it's like this at the other labs, too. They don't know what they're doing.
You spend a few minutes defining a task. Clear intent, clear scope, clear success criteria. You hand it to the AI and it starts working. But you’re not going to sit there watching a cursor move. So you jump to the next task. Set it up, define intent, delegate. Then the next one.
Three or four parallel streams running at once. You feel productive. You feel like you’ve cracked the code.
Then the first task finishes. You switch back to it. But now you need to rebuild the context. What was I trying to do here? What approach did it take? Does this actually solve the problem or just pass the tests? You evaluate, adjust, redirect. Then the second task finishes. Context switch. Evaluate. Redirect. The third one hits an error. Context switch again.
These problems are like distant locations you would hike to. In the past, you had to make the journey yourself, but you could lay down trail markers for others to follow, and you could make maps.
AI tools are like a helicopter dropping you off at the site. You skip straight to the destination and miss the journey itself, which was always only part of the value of solving these problems.
Anthropic and the Pentagon are wrangling over the use of AI. This could be very lucrative for Anthropic, especially at a time when all the AI providers are hunting for cash. I know Anthropic wants to be sanctimonious, but do they really think taking that stance will prevent misbehavior from other parties later on? (Very similar to all the climate change restrictions imposed on society.)
Here's a guy who, as an experiment, got ChatGPT and Google Gemini to promulgate his homemade falsehoods. Anthropic wasn't fooled, which may be why they're the Pentagon's top choice. This kind of gullibility will greatly lessen the usability and trustworthiness of the frontier models.
Perplexity needs money, too, and they were going to incorporate ads, but decided to drop that. Wise move.
And it looks like that jaw-dropping Seedance video of Brad Pitt and Tom Cruise fighting was just green-screen foolery. Their AI isn't that good; it was just AI rotoscoping.
Replacing humans with AI is beginning to backfire.
The first signs of burnout are coming from the people who embrace AI the most. The Jevons paradox strikes again.
A doctor wants to train AI to do her job. She must have a really simple job because AI is not there yet. This is going to be a disappointment. But then, this is a CNN article, so don't expect real journalism.
Nice review of LLM reasoning failures. Even the advanced models are failing logic tests. Yeah, we aren't there yet.