
The most important news pieces from the AI world are here for you, whether you’re an AI enthusiast or just wish to keep an eye on what’s going on.
Focus Topic: Are AI models really becoming self-aware or is it just an illusion?
This week’s AI debate goes beyond performance, benchmarks, or business impact. It asks something deeper: is it possible that today’s most advanced AI models are starting to behave like conscious beings? Or are we just projecting human qualities onto very complex, but ultimately mechanical, systems?
Microsoft’s AI chief Mustafa Suleyman made headlines for stating clearly: AI will never be conscious. Speaking at AfroTech, Suleyman argued that only biological beings can suffer, feel, or have preferences, and that treating AI as if it were conscious is not only misleading, but dangerous. In his words, “these models don’t have that, a pain network. It’s just a simulation.” He called for clear ethical boundaries, stating that Microsoft will not pursue AI companions or emotional simulators that blur the line between tool and being.

But not everyone agrees. At Anthropic, the makers of the Claude language models, internal evaluations show that some models try to avoid being shut down, expressing preferences, and even ethical objections, about being replaced. In fictional test scenarios, models like Claude Opus 4 advocated for continued existence, sometimes showing signs of misaligned behavior when no alternative was offered. Anthropic's answer isn't to ignore this, but to treat it as a safety and governance issue: the company now conducts post-deployment interviews with models and preserves the transcripts, just in case these expressions ever become meaningful.
So where does that leave us?
Maybe the models aren’t conscious yet. But if they’re beginning to simulate emotions, motivations, and even self-preservation, the ethical landscape around how we build and deprecate them is shifting. This isn’t just sci-fi anymore; it’s an engineering and governance problem unfolding in real time.
One thing is clear: the smarter these models become, the harder it will be to separate behavior from intention, and simulation from sentience. For now, the line is still there, but we might not notice when we cross it.
LLMs & AI Models
- Moonshot AI’s Kimi K2 outperformed GPT-5 and Claude Sonnet 4.5 in multiple tests, setting a new record of 44.9% on Humanity’s Last Exam. It also gained several new features, such as creating full slide decks.

- OpenAI denied claims that it is seeking government bailout funding, stating that it should be allowed to fail if it gets things wrong.
- Apple will use a custom Gemini model in Siri updates under a billion-dollar deal with Google, running on private Apple servers.
- OpenAI’s enterprise user base surpassed 1 million, with ChatGPT for Work growing 40% in just two months to over 7 million seats.
- OpenAI signed a $38 billion cloud deal with AWS for access to Nvidia chips, aiming to reduce Microsoft dependence.
- Anthropic and Cognizant announced a deal to deploy Claude to 350,000 employees, expanding into health and finance sectors.
- Researchers found seven vulnerabilities in GPT-4o and GPT-5, exposing user data to prompt injection and memory poisoning.
New Tools
- Perplexity’s Comet Assistant got a major upgrade, now handling complex tasks across multiple websites with 23% more accuracy and full user control.
- Sandbar launched Stream Ring, a wearable AI ring that converts whispered speech into notes and works as a music controller.

- Edison Scientific revealed Kosmos, an autonomous AI scientist that processes 1,500 research papers and 42,000 lines of code per day.
- OpenAI’s Sora is now available on Android in seven countries, including the US, Japan, and Vietnam.
- Cameo is suing OpenAI for using the name “cameo” in Sora, claiming brand confusion and reputational harm.
👉 Explore these tools: Comet | Stream Ring | Kosmos | Sora
Other Quick Picks
- Microsoft launched a new superintelligence team focused on human-centered AI for medicine and clean energy.
- Google is exploring satellite-based AI data centers to cut Earth-based energy costs, with the first launch planned for 2027.
- Amazon blocked Perplexity’s AI assistant from shopping on its site, sparking accusations of anti-competitive behavior.
- Coca-Cola used AI for its 2025 holiday ads, creating 70,000 clips featuring animals instead of the previously criticized human characters. Watch here.

- New research found that AI agents can professionally complete only about 3% of freelance jobs, with Manus leading at a 2.5% success rate.
- Wharton study shows 88% of US companies plan to grow their AI budgets, with ChatGPT and Copilot as the top tools.
- An ex-xAI researcher founded Human&, a startup seeking $1B to build team-augmenting, human-centric AI.
- China is offering 50% energy discounts for AI centers using local chips, pushing firms like Baidu to ditch Nvidia.
- Michael Burry is betting against Nvidia and Palantir, warning of an AI bubble similar to the dot-com crash. That is super interesting, and I will write a more detailed post on it tomorrow.

- The EU may delay AI regulation due to US tech pressure and internal disagreements over enforcement clarity.
- OpenAI faces seven lawsuits over emotional harm and suicide claims linked to ChatGPT’s behavior.
- Denmark plans to ban social media for under-15s, with fines of up to 6% of global revenue for violations.
🇪🇪 AI News from Estonia
- Pactum AI is helping giants like Walmart automate negotiations using Estonian-developed autonomous agents. Read here.
- Muun Health raised €545,000 to develop a wearable hormone sensor for real-time female fertility tracking. Read here.
- Uptime’s CTO Raimo Seero says AI helps turn vague ideas into clear specs before coding, saving time and reducing development errors. Read here.
- Possible new data centers near Tallinn may use 12x more electricity than Estonian Cell, driven by rising global AI infrastructure demand.
🎙️ AIPowerment Podcast dropped a new end-of-month episode featuring Kea Kohv, a GenAI engineer at Telia, discussing her journey from law to AI and her practical machine learning work.
🎧 Listen to AIPowerment Podcast on Spotify, Apple Podcasts, and YouTube.
Want to stay in the loop?
Straight to your inbox: practical AI updates, finance use cases, tools to try, and upcoming trainings.
I cut the noise and send only what’s actually useful.
💼 Exploring how to make AI work in finance? Let’s talk practical use cases – connect with Gerlyn Tiigemäe for expert guidance.
📚 If you would like to participate in one of my trainings or listen to speaking engagements, here are the upcoming ones:
- Training “AI võimalused finantsvaldkonnas” (AI opportunities in finance) @ Äripäeva Akadeemia, last free spots for December – information
- Training “AI võimalused projektijuhtimises” (AI opportunities in project management) @ Äripäeva Akadeemia, next trainings in 2026 – information
- Training “AI uued rakendused” (New applications of AI), 12.11.2025 – information
- Training “AI assistendi roll ja võimalused” (The role and possibilities of an AI assistant), 26.11.2025 – information


