Don’t look too closely:  America may be on computer-assisted policy auto-pilot, and that could explain a lot of Trump’s actions lately.

A number of readers are confused by my outlook on the Trump presidency.  So this being a semi-holiday, I thought it would be a good time to explain what MAY be going on.

If you don’t subscribe to our website, you may not be aware that as long ago as 2001 we postulated the existence of a “super-secret” government bureau that would advise the president on national matters using computational futures forecasting.

I even made up a name for the group.  I called it (and still do) “Directorate 153.”

In the Peoplenomics archives, you will find about eight articles on D-153 over the years that patiently offer up this “hidden bureau” as the “unseen hand” that explains much – if not most – of the seeming “drift” of presidents once they take office.  Why Bush was like Clinton, why Obama was like Bush, why Trump is drifting toward more liberal positions.

We have been hard-pressed to get everything related to D-153 (the Computational Futures Group/CFG) exactly correct, because the agency would not be immediately knowable except through changes in presidential behaviors and policies.

In other words, it’s a “back-fitting” exercise in mathematical terms.  We get enough hard data points in a “cloud” in model space and it’s implied that there is more than Trumpian randomness at work.

An example of “how we got things wrong” was in our pre-election coverage of how such a hypothetical group would have briefed the incoming president PRIOR to his taking the Oath of Office.

As it turns out, that was wrong.  Apparently, the group has only recently made itself known to Trump and – as a result – his behavior has changed, in some cases dramatically.  And just in the past few weeks, so apparently the “group” doesn’t “reveal” itself until 60 days into a presidency.

Or, the new team takes that long to “find it.”

No end of confusion on the part of our readers, though.  They are dazed and shaken by our apparent disappointment with the changing Trump:

“George, even you seem different lately in your view of Trump. As though you agree with his flip flops with your lack of criticism (or did I miss them), whereas before you were watching his actions with a squinted eye. Did some men in black visit you? LOL… but seriously, Trump has become the very person Hillary would have been, had she been elected. He’s saying he didn’t change, the facts did and he’s just reacting.”

No, disappointment in Trump is not the central idea.  Disappointment that Directorate 153 may be real?  Oh yeah…

In “George’s Alt-Reality” it is not that Trump has become the person that Hillary would have been.  It’s that he may – in effect – have had no other course of action except to change his behavior.

Let me lay out how Directorate 153 likely came into being.

We can go back to the era of the Cuban Missile Crisis in 1962, then closely follow that with the Vietnam War and the release (at that time) of the “Report from Iron Mountain: On the Possibility and Desirability of Peace…”

The book Iron Mountain established the concept of government doing a lot of “futuring” back when it wasn’t even on the table in the public mind.

It’s still not.

At least not in a “run the country” way.  Future myth and comic books, that.

But the existence of a Computational Futures Group would be quite useful both in short-term decision bounding as well as securing the long-term continuation of these United States.  Remember how much has been spent on COG – continuity of government?

It’s not that the Group would exactly tell the president what to do.  Their job would be more like answering specific computational questions about the future, responding with timeline decisions annotated in such a way as to guide the presidential decision-making process forward.

On every apparent Trump policy reversal, we see the potential involvement of the CFG/Directorate 153.

Take any crisis – like the current mess with Syria, North Korea, China, and Russia.

If you are trying to make decisions at the “guts poker” level at say the Joint Chiefs, it’s like “anything can happen day.”

As we have discussed many times in past Peoplenomics reports, the operation of small executive groups is often swayed by the amplitude of the voices in the room.  The human with the loudest voice and biggest physical presence will carry the group to consent more often than the quietest, least imposing figure.

In the art of “computationally based decision-making,” certain techniques have evolved which you may not be aware of.  One of our favorites has been the “Delphi Method,” which was documented in an Addison-Wesley book back in the 1970s, if memory serves.

If you Wiki it?

The name “Delphi” derives from the Oracle of Delphi, although the authors of the method were unhappy with the oracular connotation of the name, “smacking a little of the occult”.[9] The Delphi method is based on the assumption that group judgments are more valid than individual judgments.

The Delphi method was developed at the beginning of the Cold War to forecast the impact of technology on warfare.[10] In 1944, General Henry H. Arnold ordered the creation of the report for the U.S. Army Air Corps on the future technological capabilities that might be used by the military.

Different approaches were tried, but the shortcomings of traditional forecasting methods, such as theoretical approach, quantitative models or trend extrapolation, quickly became apparent in areas where precise scientific laws have not been established yet. To combat these shortcomings, the Delphi method was developed by Project RAND during the 1950-1960s (1959) by Olaf Helmer, Norman Dalkey, and Nicholas Rescher.[11] It has been used ever since, together with various modifications and reformulations, such as the Imen-Delphi procedure.[12]

Experts were asked to give their opinion on the probability, frequency, and intensity of possible enemy attacks. Other experts could anonymously give feedback. This process was repeated several times until a consensus emerged.  Members of the group didn’t know one another’s identities, so the physicality and the booming voice were eliminated.  So were rank, political power, and so forth.  Damn fine art.

The gist of it is a survey which looks for consensus within the interquartile range.  In other words: poll people, toss out the top and bottom 25% of answers, and feed the result back to the same group some number of times.

With the physical presence off the table, the central tendency of the group emerges.
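For the curious, that trim-and-feed-back loop can be sketched in a few lines of Python.  This is a toy illustration only – the panel numbers and the “pull” factor (how far each panelist drifts toward the group median after each anonymous feedback round) are invented for demonstration, not drawn from any real Delphi exercise:

```python
import statistics

def delphi_round(estimates):
    """One Delphi feedback round: drop (roughly) the bottom and top
    quarter of estimates, return the survivors and their median as
    the anonymous group feedback."""
    s = sorted(estimates)
    n = len(s)
    q1, q3 = n // 4, n - n // 4          # interquartile slice boundaries
    kept = s[q1:q3]
    return kept, statistics.median(kept)

def delphi(estimates, rounds=3, pull=0.5):
    """Toy Delphi: after each round every panelist moves a fraction
    `pull` of the way toward the group median, then the poll repeats."""
    for _ in range(rounds):
        _, median = delphi_round(estimates)
        estimates = [e + pull * (median - e) for e in estimates]
    return statistics.median(estimates)

panel = [10, 12, 15, 18, 22, 30, 90]     # one "loud" outlier at 90
consensus = delphi(panel)                # -> 18.0
```

Run it and the lone 90 estimate gets reeled in toward the group’s central tendency (18) within three rounds – the anonymous, numbers-only equivalent of taking the booming voice out of the room.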

So, no way around it, the Computational Futures Group we hypothesized in 2001 (and have used as an ongoing thought model/tool to better understand the bounds of decision-making) has been extremely useful.

And, if the Directorate 153 group did nothing more than run “expert Delphis,” it would be a very useful tool.

It likely went further.  A hell of a lot further.

Econometric modeling would have entered the equation in the 1980s, when computing horsepower became important as it applied to Cold War era continuity-of-government thinking.

By the early 1990s, the feedback from the KH-series of satellites would have given an additional data stream that could forward-project when enemies (or, more properly, challengers) to U.S. global dominance would be able to complete work on projects visible from space.

“What would Russian recovery time and strength be if we….” – that kind of question.

Then came the arrival of HAARP, which allowed for radio tomography to map tunnel systems in places like Afghanistan and, pertinent today, North Korea.  This was a kind of “bonus” on top of the baseline work on weather modification that led to the once-outlandish “Owning the Weather” concepts outlined in the Air Force 2025 report some years back – which one of our…ahem…contributors on military affairs is rather expert in.

Then came the A.I. and A.L. inputs.

A.I. – artificial intelligence – is everyday stuff.  But A.L. – artificial learning – has gotten little public attention.

One of the centers of work in this regard with occasional references and nods in the direction of the U.S. Army War College, was the “Disciple Project” at George Mason University.  Their Learning Agents Center has been up and in operation since 2001 and you can start learning about it here.

You can begin to see now the terribly important role, yet fairly compact footprint, that our “hypothetical” Directorate 153/Computational Futures Group would have.

With “artificial intelligence” to parse spy satellite data, NSA phone and email data, and econometric models running while markets are trading – all of it married up into a “learning agent” with specific goals in mind – the idea of a president operating within a bounded decision-making setting becomes rather clear.

The most difficult thing to comprehend is that America may already be – for all intents and purposes – under the direction of a super-secret and tightly controlled forward-looking A.I./A.L. platform.

We only owe an apology to Peoplenomics subscribers for suggesting that the existence of such a platform would have been explained to President Trump PRIOR to taking office, and indeed before the election.  We were wrong.  It comes after.

As I thought about it more and more, letting Trump roll in “unbounded mode” gave the A.L. component time to assess the Trump personality and how it was – in turn – viewed by major media and governments around the world.  The “learning agent” would use RSS and other “news” inputs as feedback monitors to see how policy calls are going and to further intuit rule sets for future policies…

With its (hypothetical) presence revealed, the least surprised people in the world are those Peoplenomics readers who have, over the years, become accustomed to “thinking the unthinkable.”

For everyone else our message is simple:

The technology to apply A.I. to econometric, social, and national “technical means” data has been around for 10-30 years.

The tech to learn – as in “infer rules of how the computational future will behave” – has also been implicit in GMU’s L.A. work, and presumably elsewhere, for an equal amount of time.

With Trump buckling in now for the bulk of his term, we expect the odds are at least 50% that some lash-up of A.I. is already “advising” the White House and Congress on which courses of action to take and where at least the “Detroit Barriers” of given policies and actions lie.

And what if the reason we do not have genuine progress on many of the Trump campaign promises is that they “don’t model well” for now?

Welcome to the “shared-ruling paradigm.”

We foresee many, many more Trump reversals to come as the U.S. is likely NOT the only country which is tinkering with advanced computer decision-support or outright direction of national policy.

At least two others (China and Russia) likely have the brainpower and technological horsepower to cobble up something that could give us a real run for our money. Who has how many humans in the loop?  Key question, that.  But you won’t find it on any U.N. Agenda.  All hush-hush.

As long as the lights are on, you may rest assured that computational parity or superiority by the U.S. is being maintained.  Once the EMP goes off or the grid goes down, then the learning agent software would likely infer only one outcome.  We would launch preemptively and then it’s an automated Finis.

When that happens, you won’t want to be within 100 miles of anything worth dropping a MIRV on.

Just thought you’d like a look behind the curtain.  Thinking-the-unthinkable stuff isn’t generally wheeled out on this site.  But we will have some further speculations this weekend for subscribers.

Write when you get rich,