The Powers That Be
An invisible hand is pulling the levers from the cover of a warped democracy.
2026 in AI started with billions in funding, humanoid robots, and—unsurprisingly—ethical controversies. OpenAI just closed a historic $110 billion funding round, backed by Amazon, Nvidia, and SoftBank, consolidating its position as the leader in the race toward AGI. The money is meant to accelerate the development of advanced models and build out infrastructure, but it also deepens questions about the concentration of power in AI.
One might wonder: why so much money and no product?
It’s hard not to notice that these same entities are politically connected, pulling strings at the highest levels. The U.S. economy has put all its chips on AI (pun intended), betting heavily that it will deliver tangible results in the near future. For now, though, it’s just a cycle of investments among the same players: Nvidia invests in OpenAI, OpenAI uses that money to buy Nvidia chips and cloud capacity from Oracle, and Oracle, in turn, buys chips from Nvidia. Google, Amazon, and Anthropic have a similar circular arrangement of their own. It’s hard not to see this as a bubble forming, given that there is no marketable product yet to justify this level of investment.
Or, there could be another endgame scenario.
We’ve grown accustomed to viewing AI in a democratic way. After all, almost everyone can access it. It’s easy to believe that this is a technology for the people. But in reality, it primarily benefits capital owners, those already rooted in the industry. Not coincidentally, these are the same entities pouring massive cash into AI: Microsoft, Amazon, Nvidia, Tesla, and so on. These players stand to gain enormously from advanced AI at every level, from the factory floor to the CEO’s office. AI could drive such efficiency that running a billion-dollar company might soon require just 10,000 employees instead of 100,000.
The outcome: massive gains for shareholders. But also massive layoffs, soaring unemployment, social unrest—even revolution. And what’s the point of driving such efficiency if the consumer is unemployed and can no longer afford the very products you’re selling?
There’s an even darker side. The players who fund AI development, and hence set its direction, also control all the major free tools you use. By using those tools, you willingly feed them your data, so they control you as well, indirectly, through your digital persona. Knowing you gives them power over you.
The potential for subliminal manipulation is massive. Algorithmic platforms—the ones we used to call "social"—can be influenced to push content that favors one side over another. Entire cohorts can be isolated from the broader population and fed tailored content designed to influence their voting behavior.
For example, a group of young males eligible for military conscription could be carved out from the larger pool of males and fed manipulative content suggesting that the opposing political group wants to start a war. Such a campaign would be highly effective because it targets the exact demographic most impacted by the decision.
Adding AI to a fully connected world means the cost of running an authoritarian regime has plummeted by orders of magnitude. People willingly surrender their information through social media, making surveillance almost effortless. A modern Gestapo wouldn’t need to knock on doors to find you—your location is right there in your Instagram post. They wouldn’t need to dig for your deepest fears—ChatGPT already knows them. They wouldn’t even need to tap your phone—your opinions are already there in your comments. Everything is already out there.
Aggregate all of this information with AI, and you can predict how masses move.
Have AI carefully craft and target specific content, and you can control how masses move.
Hence, whoever controls AI, controls the narrative.