People are learning to trust AI to make decisions on their behalf. This will change our world exponentially in the next 10 years.
Now that roughly 50% of the human population is connected to the Net, the AI behind the IoT will be a growing trust factor in our daily lives.
We are accelerating beyond simply trusting that the answers to our questions from "Siri" or "Alexa" are correct. Accepting the route Google Maps recommends as the ideal navigation to our destination is already a given.
Beyond the consumer, "Algo Bots" and algorithmic trading have already replaced the approximately 600 Goldman Sachs traders of previous years with two people who oversee daily operations on the floor. Others have already predicted the replacement of human operators in various public and private decision-making bodies.
So what?
Trust Decisions in the next decade will be augmented by "Artificial Intelligence" ever more frequently. That is already a given for many groups of decision-makers across the globe. The question is: how will governments begin to regulate AI?
Who will be in charge of making sure that the code and the algorithmic activity are correct? That the rules behind the Trust Decisions are correct?
You see, as the software becomes more invasive in an individual's daily life and we rely on it for the truth, governments will be involved. They already are.
The "rules for composing the rules, that lead to millions of peoples trusted decisions is at stake. Maybe even more so, the evolution of "Quantum Law." For those thought leaders such as Jeffrey Ritter who have for years been so keen to articulate the emergence of the thought of governance of unstructured data, there is this:
"We are moving from a time in which we presume that all electronic information is true to a time in which we can affirmatively calculate what it is and know the rules by which it is governed on the fly," Ritter said. "That's quantum governance."You realize that the words will live on for eternity and for others to always contemplate. That is a given, that all of us shall be considering for our future, sooner than later.
So how might decision-making bodies such as the U.S. National Security Council (NSC) utilize AI? Greg Lindsay and August Cole addressed this years ago with METIS:
"The result is a national security apparatus capable of operating at, as you like to say, “at the speed of thought”—which is still barely fast enough to keep up with today’s AI-enhanced threats. It required a wrenching shift from deliberative policymaking to massively predictive analysis by machines, with ultimate responsibility concentrated in your hands at the very top."
In 2019, begin thinking deeper and longer about your Trust Decisions...