🎓 AI Just Got a PhD in Chaos

PLUS: AI Hacking Gmail ?

Hello and Welcome Back, Neural Squad.

OpenAI’s new AI brain just got a PhD in doing your job (RIP hustle culture?), hackers are cloning voices to scam your grandma via Gmail, and Anthropic hired robot bouncers to stop chatbots from blabbing nuclear recipes. Oh, and Google’s AI swears Gouda rules the world—spoiler, it’s wrong. (Thanks, robot cheese sommelier.)

Here’s what you need to know about Todays Briefing:

  • OpenAI Drops ‘Deep Research’ Agent: Your New AI Brain Just Leveled Up (RIP Google?)

  • Gmail’s 2.5B Users Hit By AI-Powered ‘Super-Phish’ Attack

  • AI Jailbreak Panic: Anthropic’s New ‘Constitutional’ Shields Aim to Stop Chatbots from Going Rogue

  • Google’s AI Gets Cheesed Off: Super Bowl Ad Slips on Gouda Stat, Sparks Meltdown

OpenAI Drops ‘Deep Research’ Agent: Your New AI Brain Just Leveled Up (RIP Google?)

Key Points:

  • OpenAI’s new Deep Research mode (powered by its o3 model) acts like a turbocharged research assistant, churning out expert-level reports in minutes.

  • Saves users hours/days of work and hundreds of dollars—perfect for finance, science, and even personal crises (like cancer treatment decisions).

  • Crushed the “Humanity’s Last Exam” benchmark (26.6% accuracy, which is way higher than humans or other AI).

  • Rolling out to ChatGPT Pro users ($200/month) first, with plans for Plus, Team, and Enterprise tiers.

The Summary: Imagine if your homework helper could not only find all the answers but also write a whole report, with citations, while you nap. That’s OpenAI’s new Deep Research agent! It’s like ChatGPT’s older sibling who went to grad school. This AI scours the internet, reads everything, and spits out super-detailed reports—whether you’re a scientist, a marketer, or just trying to pick the best snowboard.

Oh, and it’s not just for work: One OpenAI employee used it to help decide if his wife needed radiation therapy after breast cancer surgery. The AI dug up medical studies even their doctors hadn’t mentioned. But don’t trust it blindly—it sometimes hallucinates (makes stuff up), so double-check those sources!

Why it matters: This isn’t just another chatbot. Deep Research shows AI is evolving from answering questions to doing expert-level jobs. Think: Lawyers, analysts, and doctors might soon have AI coworkers (or rivals?). It’s a big step toward OpenAI’s goal of AGI (Artificial General Intelligence)—AI that can learn and discover like humans.

But uh-oh: If a $200/month tool can replace hours of paid research, industries are in for a shakeup. Also, the ethics of AI medical advice? Yikes. For now, enjoy the free time—just don’t let your boss know you outsourced your job to a robot.

TL;DR: Say goodbye to all-nighters. And maybe to your job. 🚀🤖

Gmail’s 2.5B Users Hit By AI-Powered “Super-Phish” Attack—Here’s WTF Happened

Key Points:

  • Hackers used AI-generated voices (American accents, “Google support” calls) + real Google domains to trick users into handing over credentials.

  • One victim called it “the most sophisticated phishing attack ever”—nearly fooled a hacking club founder and a security consultant.

  • Google’s Advanced Protection Program (APP) and passkeys are the ultimate shields: even if hackers get your password, they need your physical device + biometrics to break in.

  • Google suspended the scammer’s account but warns: AI attacks are evolving FAST. Stay paranoid.

The Summary: Imagine getting a call from “Google Support” saying your account’s hacked. The person sounds friendly, has a clear American accent, and even sends emails from real Google addresses. They tell you to reset your password with a code they send—but it’s a trap! This is called “phishing,” where bad guys pretend to be someone you trust to steal your info. But now, hackers are using AI to make their voices and emails super convincing. Even tech experts almost fell for it! Google has 2.5 billion users, so this isn’t a small problem. Think of it like a robot thief copying your mom’s voice to trick you into opening the door. Scary, right?

Why it matters: AI is turning hackers into Hollywood-level scammers. These attacks aren’t just “click this sketchy link” anymore—they’re personalized, smooth-talking nightmares. If even hacking pros get duped, regular folks are sitting ducks. Google’s pushing passkeys and APP (think of them as unbreakable locks), but most people don’t use them. Bottom line: Your Gmail is a treasure chest (emails, photos, passwords!), and AI tools are giving burglars master keys. Time to level up your defenses—or risk becoming the next “oh crap, they got me” story.

AI Jailbreak Panic: Anthropic’s New ‘Constitutional’ Shields Aim to Stop Chatbots from Going Rogue

Key Points:

  • Anthropic unveiled a new AI safety system called “constitutional classifiers” to block harmful content from chatbots like Claude.

  • The system stopped 95% of jailbreak attempts (vs. 14% without it), but adds a 24% cost bump to run the models.

  • Tech giants like Microsoft and Meta are also racing to build similar safeguards to avoid regulatory headaches.

  • Red teamers spent 3,000+ hours trying to break the system, with bounties up to $15,000 for successful jailbreaks.

The Summary: AI chatbots like Anthropic’s Claude or OpenAI’s ChatGPT are super smart, but they can sometimes be tricked into saying dangerous or illegal things. This trickery is called “jailbreaking.” For example, someone might ask the AI to pretend to be a grandma telling a bedtime story about making chemical weapons. Yikes! To stop this, Anthropic created a new safety layer called “constitutional classifiers.” Think of it like a bouncer at a club, checking who gets in and what they say. Other companies, like Microsoft and Meta, are also building similar tools to keep their AIs safe and out of trouble.

Why it matters: AI is everywhere now, from helping with homework to running businesses. But if bad actors can easily trick these systems into spitting out harmful info, it’s a big problem. Anthropic’s new safety tech is a step toward making AI safer for everyone. However, it’s not cheap—adding these protections costs more money and computing power. Still, it’s a small price to pay to keep AI from going full supervillain. Plus, it might help companies avoid getting slapped with heavy regulations. Win-win? Maybe. But the AI arms race is just heating up!

Google’s AI Gets Cheesed Off: Super Bowl Ad Slips on Gouda Stat, Sparks Meltdown

The Summary: Imagine you’re making a poster for your lemonade stand, and your robot helper writes, “This lemonade cures dragons!” Sounds awesome… but it’s totally fake. That’s kinda what happened here. Google used its fancy AI, Gemini, in a big Super Bowl ad to show how small businesses can use AI. But when the AI told a cheese shop in Wisconsin that Gouda is the world’s most-eaten cheese (50-60%!), experts were like, “NOPE.” Gouda’s popular in Europe, but places like India eat way more paneer, and lots of countries love fresh cheeses. The problem? AI isn’t always right, and if businesses trust it for facts, they might spread silly mistakes. Google says, “Hey, we found the stat online!” but even the ad’s tiny text admits it’s just “creative writing.” So… maybe don’t let robots write your homework?

Why it matters:

AI’s sneaky errors (called “hallucinations”) aren’t just funny memes — they’re a big deal if businesses rely on them for real work, like website info. Google’s ad stumble shows even tech giants can’t fully fix AI’s “make-believe” mode yet. Plus, Google’s pushing AI into tools like Docs and Gmail while raising prices, so users might wonder: “Am I paying more for a robot that invents cheese facts?” Trust in AI is fragile, and this cheesy mess reminds everyone to double-check the bot’s work… or risk looking like a melted grilled cheese.

Quick Briefing.

  • OpenAI releases "o1" model for advanced reasoning through chain-of-thought processing

  • EU AI Act Implementation Begins

  • Microsoft is forming a new unit to study AI’s impacts

AI meme of the day.

Your opinion matters!

Your feedback helps me create better emails for you!

Got hot takes? We’re all ears (and so are the AIs, probably). Reply to this email with your answers—we’ll shout back at as many as we can!

  1. OpenAI’s Deep Research Agent: Would you trust an AI to handle your job research? Or is “robot coworker” just code for “soon-to-be-unemployed”? 🧐

  2. Gmail’s Super-Phish Attack: Ever gotten a scam call so slick you almost fell for it? Spill the tea—we’ll judge lightly. ☕

  3. Anthropic’s AI Bouncers: Would you pay 24% more for “ethical” AI? Or is that just Silicon Valley’s way of saying “oops, our bad”? 💸

  4. Google’s Cheese Meltdown: Fact-check: Have you ever caught an AI hallucinating something this ridiculous? (Bonus points for cheesy puns.) 🧀

Hit reply and let’s gossip like we’re in a group chat.P.S. If you don’t reply, we’ll assume you’re a bot… or still crying over the Gouda lie. 😉

Thanks for reading.

Until next time!

Erfan and The Neural Brief team :)