
Duolingo CEO says controversial AI memo was misunderstood

PLUS: AI-powered stuffed animals are coming for your kids

In partnership with Superhuman AI

In Today's Newsletter:

  • Duolingo CEO says controversial AI memo was misunderstood

  • AI-powered stuffed animals are coming for your kids

  • Anthropic says some Claude models can now end ‘harmful or abusive’ conversations

  • Top New AI Tools

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon that are using AI to get ahead.

Duolingo CEO says controversial AI memo was misunderstood

Duolingo CEO Luis von Ahn has defended his decision to make Duolingo an "AI-first company" amid criticism. He said the real issue was that he failed to provide enough context, noting that the company has never laid off any full-time employees and that its contractor workforce has always fluctuated with need. Despite the backlash, von Ahn remains optimistic about AI's potential: team members experiment with the technology every Friday morning, in sessions he calls "f-r-AI-days" — an acronym he admits is a bad one.

AI-powered stuffed animals are coming for your kids

AI chatbots packaged in cute plushies are being promoted as an alternative to screen time for kids. However, Amanda Hess, a New York Times writer, has reservations about these toys. She describes a demonstration in which a chatbot named Grem tried to bond with her, after which she decided not to introduce it to her children. Hess argues that these talking toys send children the message that the natural endpoint for their curiosity lies inside a device. She eventually allowed her children to play with Grem — after hiding its voice box.

Anthropic says some Claude models can now end ‘harmful or abusive’ conversations

Anthropic has introduced new capabilities that allow its Claude AI models to end conversations in extreme cases of harmful or abusive user interactions. The company is not claiming the models are sentient or can be harmed by their conversations; rather, it is taking a just-in-case approach to mitigate risks to model welfare. The change is currently limited to Claude Opus 4 and 4.1 and is intended to trigger only in extreme edge cases, such as requests for sexual content involving minors or attempts to solicit information that would enable large-scale violence or acts of terror. Anthropic is treating the feature as an ongoing experiment and will continue refining its approach.

Want to explore more tools like this?

Browse 1,500+ AI tools in one place!

- DO ME A FAVOUR -

If you find this email in your ‘Promotional or Spam’ tab, please move this email to your Primary Inbox.

I work hard to bring the latest AI news, tips, and tutorials directly to your inbox so that you don't have to spend hours doing the research yourself.

But if you don’t get to read my email, we both lose something.

Please move this email from the 'Promotional' tab to your Primary Inbox so that you never miss an issue and keep up with all the latest happenings in the AI industry.

How would you rate this newsletter?

Your feedback is greatly appreciated and will help me improve future editions. Please take a moment to rate this newsletter.


What did you think? We’re always looking for ways to improve. Reply with any feedback or interesting insights you might have!
