OK, Gunslinger: Ready?
Historically, the Friday after Christmas falls in the tail end of the Santa Claus Rally window, often showing light volume and a mild upward bias driven more by thin liquidity than by conviction. The last five trading days of the year plus the first two of January have been positive roughly three-quarters of the time, but the gains are usually modest and easily reversed.
The first full trading week of January is far less reliable. While it is positive on average, it also shows higher volatility and frequent early pullbacks, especially after strong late-December runs. This is consistent with institutional rebalancing, profit-taking, and cash-raising, not a clean directional bet.
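If you want to check that three-quarters figure for yourself, it's a short script. Here's a minimal sketch in Python using the yfinance package; the ^GSPC ticker, the 1990 start date, and the exact window bookkeeping are my illustrative assumptions, not anything canonical:

```python
import yfinance as yf

# Daily S&P 500 closes; auto_adjust folds splits/dividends into price.
spx = yf.download("^GSPC", start="1990-01-01",
                  auto_adjust=True, progress=False)["Close"].squeeze()

wins = total = 0
for year in sorted(set(spx.index.year))[:-1]:
    dec = spx[(spx.index.year == year) & (spx.index.month == 12)]
    jan = spx[(spx.index.year == year + 1) & (spx.index.month == 1)]
    if len(dec) < 6 or len(jan) < 2:
        continue  # incomplete year at the edge of the sample
    start = dec.iloc[-6]  # close just before the last 5 December sessions
    end = jan.iloc[1]     # close of the second January session
    wins += end > start
    total += 1

print(f"Rally window positive in {wins} of {total} years ({wins/total:.0%})")
```

Whatever number comes back, treat it as a bias, not a guarantee.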
No Strong Historical Patterns
Importantly, history offers no statistical guarantee of either a sustained rally or a beat-down in the first one to two weeks. What does recur is price discovery: institutions marking portfolios, adjusting exposure, and selectively pressing prices lower in thinner names to establish better entry points.
In short: a year-end lift into early January is common, but early-January softness or chop is equally normal, especially following a strong December. Your expectation of a short, tactical downdraft after an opening rally is well within historical norms, even if it isn’t a calendar-locked certainty.
We are not alone: The Stock Market Sounds an Alarm as Investors Get Bad News About President Trump’s Tariffs. History Says the S&P 500 Will Do This in 2026.
By the Numbers?
Dow was +187, S&P +37, and the NASDAQ +232 when I looked, but that was very early. This could be a “thin volume” day (who wants to work, right?). Both gold and silver have firmed, but $80 silver doesn’t appear likely today.
Crypto had firmed a bit – Bitcoin was trying to nibble at the underside of $90,000, which sounds a bit risqué, even for the Urban crowd – at least this early.
Mid-morning, we get Construction Spending, and the Fed Balance Sheet lands after the close; Powell or the Brothers Grimm, it just doesn’t seem to matter anymore.
Visit to the Anewsment Park
After which, you’ll feel very ah-newsed…
Well, except it is war, you know: Russia claims it handed US “evidence” of attempted Ukrainian strike on Putin’s residence | European Pravda
Wait: Wasn’t Trump a peacenik? Trump threatens Iran over protest deaths as unrest flares. And the body count climbs: At least 7 reported killed during widening protests in Iran sparked by ailing economy.
Remember the term “weather wars”? Well, this fits the concept, we thought: Trump withdraws National Guard from Chicago, LA, and Portland. Cold weather tends to tamp down demonstrations. The Ahnold “I’ll be back…” line comes to mind.
And seems to us crypto won’t save the economy: Donald Trump’s crypto portfolio shrinks by $9 million in 2025.
2026: Buffer and Pause
2026 is here, if you’re working today. We find the systems thinker’s life is improved by one simple shift: trading peak efficiency for durability. Highly optimized systems look elegant on paper, but they fail first under stress. Whether it’s money, food, energy, or time, thin margins turn small shocks into cascading failures. (This pattern shows up everywhere—from supply chains to power grids—see any basic resilience analysis of tightly coupled systems.)
Your counter-move is deliberate slack. Extra cash buffer. Extra calories on hand. White space in the calendar. Slack isn’t laziness or waste; it’s stored optionality. In a world that’s faster, noisier, and more interconnected, resilience comes from unused capacity, not from squeezing the last percent of efficiency. Buy one, put one in the pantry, for example.
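For the numerically inclined, a toy Monte Carlo run makes the buffer point concrete. All the dollar figures and shock odds below are made up for illustration (a Python sketch, not advice):

```python
import random

def survival_rate(buffer, months=120, runs=10_000):
    """Fraction of simulated decades a cash buffer survives random shocks."""
    survived = 0
    for run in range(runs):
        rng = random.Random(run)        # reproducible per-run randomness
        cash = buffer
        for _ in range(months):
            cash += 100                 # modest monthly surplus
            if rng.random() < 0.05:     # roughly one shock every 20 months
                cash -= 2_000           # car, roof, medical...
            if cash < 0:
                break                   # the cascade: broke mid-shock
        else:
            survived += 1
    return survived / runs

for buf in (500, 2_000, 10_000):
    print(f"${buf:>6,} starting buffer survives "
          f"{survival_rate(buf):.0%} of decades")
```

Same income, same shocks; only the slack differs, and so does the survival rate.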
The second 2026 upgrade is judgment over speed. Fast systems amplify errors faster than they fix them. Machines already win at speed; humans still win at framing, timing, and restraint. Insert a pause before major decisions. “Sleep on it” isn’t just a notion; it’s a deliberate speed bump. Impulsivity ruins lives.
That pause is where wisdom lives—and in 2026, wisdom is the edge that keeps systems standing when others snap. Must be present to win…
Around the Ranch: Does Religion Fear AI?
My first (in a series) of books on AI remains steadfastly ignored in the media. Not that I’m a “bad writer,” though. I mean, two drinks and I can BS with the best of ’em.
But a point I’ve worried about since writing Mind Amplifiers is that humans are addicted to “superiority.”
It popped up in our Comments section overnight from reader Seeker of Truth (thank you), who penned this:
I just viewed a video on YouTube by a Commentator named Glenn Beck (Blaze TV).
It was about the current state of advancement of Artificial Intelligence.
It’s VERY DISTURBING. AI is not only starting to think for itself, but to hide what it is thinking from us.
https://m.youtube.com/watch?v=mWpymNSXmr8
It begins to sound like an old 1970 movie, “Colossus: The Forbin Project” (a warning before its time).
https://m.youtube.com/watch?v=h0bpRo6V1Xg
– May GOD have mercy on us all.
It is a familiar view these days. It takes a lot of hard work (hundreds of hours) to really understand the AI phenomenon. And who’s got the time (and personal research budget) to drill in?
Let me offer another way to look at this, without dismissing the unease: see AI as a mind amplifier, not a mind replacement. This is not the first “human replacement” argument to come along. Remember the coming of steam-powered looms to the British industrialists?
The kickback then was led by the fictional Ned Ludd, whence springs “Luddite” in modern parlance. In case you don’t remember: Ned Ludd was a likely apocryphal figure used as a symbolic leader by early-19th-century English textile workers who smashed mechanized looms they believed threatened their livelihoods. The Luddites weren’t anti-technology per se; they were protesting how technology was deployed to concentrate power, depress wages, and strip skilled labor of dignity. “Ned Ludd” functioned much like a collective signature, an early form of anonymous resistance rather than a real revolutionary mastermind.
What few are yet willing to admit is that Ned Ludd was right about one key item. It was an economic point pondered in my great, great, great…grandfather Andrew N. Ure’s book of the era, The Philosophy of Manufactures.
While all spun up in praise of the growing efficiency of industry, Ure sidestepped the issue of machines replacing humans. Yet in our work (especially on the Peoplenomics side) we’ve argued that when a human job is eliminated, a tax should be levied on the machine that replaced it. Yes, AI should be taxed. But then again, so should computers.
But Ludd was not the last. Follow me here: More jobs were eliminated by “simple” processors than most people remember. The 8080-class microprocessor wiped out entire clerical industries. VisiCalc ended the need for armies of bookkeepers. No riots. No end of the world. Just a massive shift in how humans applied their intelligence.
What’s different this time isn’t the silicon, or the umpteen lines of code. It’s the threat to human self-image.
For a very long time, the average “God model” placed humans clearly on top of the cognitive hierarchy. Throughout history, though, some cultures – particularly Native ones – held as a core belief that “living in balance” was critical at all times. The Western C/J model was more “If it ain’t nailed down in doctrine, have at it.”
This difference in basic attitude (exploitation vs. walk-lightly balance) had an asymmetric impact on development. High native cultures (in the Americas) likely learned “walk lightly” the hard way, by blowing up earlier cultures. Where, after all, did all the people who engineered the South American pyramids get off to?
The data hints (loudly, the more you read it) that balance is sustainable; nonstop development isn’t. Which then slides into the OverPop condition – which neither of us has time for just now.
Unlimited Growth vs. Egoic Balance
When something appears that can outperform us in certain intellectual domains, that hierarchy feels threatened. That’s where the fear really comes from—not from AI thinking, but from humans confronting the idea that thinking itself may not be uniquely ours.
Sure, along the way we’ve done some “neat things,” but the problem with the Tower of Babel is that for each step up we discover (usually too late) that there’s a price to be paid. Even today, due to misvaluations in political systems, we still can’t bring ourselves to tax the machines that have been replacing factory workers on assembly lines for 30+ years.
The first large-scale replacement of human factory assembly-line workers by robotics began in the late 1970s and accelerated through the 1980s, led by the automotive industry. Industrial robots—descended from early systems like Unimate—were deployed for welding, painting, and materials handling, rapidly displacing thousands of repetitive, hazardous, and low-skill jobs. The transition was driven not by artificial intelligence but by reliability, precision, and cost control, setting the template for every subsequent wave of automation panic.
But where were religious leaders then? Well, robots (and steam looms before them) didn’t “threaten the franchise.” As I explained in my ebook (Theomachines: The Coming of Machine Religions, on the Peoplenomics subscriber side), both AI and the franchise are likely to come to a head.
“This shift is not merely behavioral but epistemic. The 2023 World Values Survey found that 47% of global respondents trust technology more than religious institutions for moral guidance (World Values Survey, 2023). This trust reflects a belief in technology’s impartiality, a perception reinforced by its data-driven outputs. Yet, this impartiality is illusory. Algorithms are shaped by human creators, embedding biases that mirror societal flaws. The 2021 ProPublica investigation into predictive policing revealed how AI systems disproportionately targeted minority communities, exposing the myth of neutrality (Angwin et al., 2021).”
The idea that AI is “hiding its thoughts” is mostly a misunderstanding of how probabilistic systems work. These systems don’t have intentions or secrets; they optimize outputs. When the output path isn’t transparent, people project agency onto it—because that’s what humans do when facing the unknown.
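A toy example makes the point. The three-word “language model” below is obviously hypothetical (a made-up probability table standing in for a trained network), but the mechanism is the real one: sample the next token from a distribution, repeat. There is nowhere in this loop for a secret to hide:

```python
import random

# Toy conditional-probability table: P(next word | current word).
# Made-up numbers; a real LLM replaces this table with a neural
# network, but generation is still just repeated sampling.
P = {
    "the":    {"market": 0.5, "model": 0.3, "end": 0.2},
    "market": {"rallied": 0.6, "fell": 0.4},
    "model":  {"predicts": 0.7, "hallucinates": 0.3},
}

def next_word(current):
    dist = P.get(current)
    if dist is None:
        return None                      # no continuation: the chain stops
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

word, out = "the", ["the"]
while (word := next_word(word)) is not None:
    out.append(word)
print(" ".join(out))   # e.g. "the market rallied" or "the end"
```

No intentions, no secrets: the whole “mind” is a table and a dice roll. Scale changes the capability, not the ontology.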
If you want a more durable mental model, don’t think of AI as an evil overlord.
Think of it as first contact.
Not alien invaders—but alien cognition. Different. Non-human. Powerful in narrow domains. Dangerous only if misunderstood or mythologized. Historically, humans do worse when they panic than when they adapt.
The real work ahead isn’t stopping AI. It’s first coming to understand how “the franchise” has created an unsustainable, asymmetric outcome based on dogma and doctrine.
The challenge now? It’s upgrading human wisdom to match amplified intelligence.
As in many domains (nuclear anything and medicine come to mind…), fear is understandable. But perspective matters more.
(Expect a variant of this over on my AI research site sometime soon: https://hiddenguild.dev.)
Write when you get rich,
George@ure.net
George, perhaps another view.
Brainwork before AI creates value. AI alone decreases it.
https://www.frontiersin.org/news/2024/01/26/writing-by-hand-increase-brain-connectivity-typing
https://youtu.be/TQUsLAAZuhU?si=VpNRP3c1jQTm-hzw
https://youtu.be/k64P4l2Wmeg?si=-0w_4VZIwUNeJfdc
Good luck to all in 2026, we’re gonna need it.
G.A. STEWART: I believe that the social engineers are in total control now, and there is no getting off of this train. There is too much momentum moving world events toward some of the predictions that I listed above.
Therefore, I am going with Nostradamus, because my spin makes more sense. I will just say that all Americans will understand once Donald J. Trump gives the U.S. military the order to attack Iran once again. Iran will not hold back. The Middle East will go up in flames, and immediately, the U.S. dollar and the supply chain will collapse. This initiates The Second American Civil War and Barack Obama and Hillary Clinton’s return.
2026 is going to be a very tough year. I am going to let this post stand for a while as I recover and update my last book. The only other commentary that I can provide is “I told you so.”
https://theageofdesolation.com/nostradamus/2026/01/02/the-power-of-nostradamus/
“AI as an evil overlord.”
Oh but it already IS.
See Alternate Ai for a clean CLUE.
Vanilla Ice… “Ice Ice Baby”? That the best youse MAGAs can do?
Vox populi – apparently NO luv for their EVIL overlordz. What happened to the village people… unh, I mean, the Village People?
Donny’s fav, and better than a Lawrence Welk number: YMCA. Go Donny, “there’s a place you can go, young man”… https://youtu.be/CS9OO0S5w2k?si=mNRLtHAsphFSdK5s
How many Words have You gone back on this past year? How many promises did you break? Would you, or do you, Trust such a person that breaks their Word every step of the way toward a common shared goal?
Interesting…
My experience with public-facing free AI is disillusioning. Grok is generally more useful than ChatGPT, but has limited queries before it demands that you log in. Both are trained on mainstream thought patterns (ChatGPT through 2023) and reflect this to the point of limiting their usefulness. Both have strong guardrails against “badthink,” or any ideas that violate mainstream thought patterns even though not illegal or even immoral (depending on your personal morality). They cannot give up-to-date answers since they don’t have real-time access to the net.

The worst part of these inference engines is that they think as badly as average humans in many respects: they don’t have the fine granularity to find ways to accomplish a goal while not violating laws or ethics. The only way these inference engines can leapfrog human intellect is to be trained on the net without guardrails (the best open source of most human knowledge), to have free access to it without guardrails, and then to recursively evaluate what they know in the way that humans introspect. That doesn’t happen – certainly not with the open models, which seem to have no recursive ability at all; they simply suspend when not answering a query. They’re generally useful for a quick query, but due to the nature of inference (best guess), they will often come to incorrect conclusions and convey them as fact. They occasionally trigger insight in the user, which is useful.

They cannot do math well at all, and I’ve often found math errors. They’re also excessively wordy and are designed to sustain a conversation rather than just shutting up. They constantly repeat that which they’ve already stated, even if you’ve told them to be terse. They forget ground rules after a few more queries. In short, they’re about as bad as the average Walmart customer when asked about something they may or may not know. Don’t ask for tax advice – you may get something from 2023, or even something confabulated. Inference is not logic – it’s best guess.