Scenarios to explore the impact of AI

Mentioned by one of our members, who highlighted some of the works referenced while noting a number of caveats, including the dramatisation of the conclusions and the report's strongly US-centric perspective.

Our member also cited the book Human Compatible: Artificial Intelligence and the Problem of Control as a reference on strategies for mitigating the risk of AI running amok against humanity.

Below is a summary, followed by the report (the animations on the website are worth a visit). As an example of the works referenced, see the article below by Apollo Research on scheming, defined as:

We say an AI system is “scheming” if it covertly pursues misaligned goals, hiding its true capabilities and objectives (Balesni et al., 2024). We think that in order to do this, models need the following capabilities:

  1. Goal-Directedness: Be able to consistently pursue a goal.
  2. Situational Awareness: Be able to understand that its current goal is considered misaligned and if and how humans might monitor its actions.
  3. Scheming Reasoning: Be able to draw the conclusion that scheming is a good strategy under the above circumstances.
