EsoErik

Monday, June 24, 2019


Concept: The Artificial, Feeling, Cognizant Application

Just playing around with some fun concepts, here...

"Hard AI Kit" makes available to you the panoply of software components required to construct your own thinking, self-aware mind. Various example Mind Assemblies are included to give you somewhere to begin.

And where to begin? I suggest: by observing the behavior of the provided example Mind Assemblies within simulated universes. Hard AI Kit includes a universe simulator component named, naturally, Universal Simulator, where Mind Assemblies may dwell. Just as an existing human being is called a "person", a Mind Assembly dwelling in a universe is called a "Mind Assembly agent". Try instantiating a simulated universe and spawning a Mind Assembly agent within it:

# HAIK US create --standard "myuniverse"
     Universe "myuniverse" created! Note: Universal Simulator daemon (HAIK.USD) is not running.
# sudo systemctl start HAIK.USD
# HAIK US status
     Universe "myuniverse", population 9.1Bn NPC, 0 user MA. Age 13.772Bn Y total, +3s from instantiation
# HAIK MA spawn "ExampleA" "myuniverse"
     Mind Assembly "ExampleA" agent spawned in "myuniverse" with randomly generated name "Jnn Xavier Smithson"
# HAIK US status
     Universe "myuniverse", population 9.1Bn NPC, 1 user MA. Age 13.772Bn Y total, +7s from instantiation

At this point, time is passing in your universe, "myuniverse", and "Jnn Xavier Smithson" (or whatever your MA agent is named) is experiencing that time passing in "myuniverse". You might wonder what that MA agent is up to. What, exactly, is happening during this time that is passing in "myuniverse"?

Any mind is ultimately nothing more than a program and data, so the same is necessarily true for a Mind Assembly agent. Your human brain, the one in your head, is a mind and is therefore conceptually identical to a Mind Assembly agent in these essentials. However, Mind Assembly agent and human brain implementations differ. Whereas your human brain is a program implemented in neurons and the connections between them, with access to a pool of stored data (your memories) encoded in more neurons and connections between neurons, a Mind Assembly agent is x64 machine code and an sqlite3 database file.
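Since Hard AI Kit is just a concept, here's a toy Python sketch of the "program plus data" idea: the "program" is an ordinary function and the "data" (the mind's memories) is an sqlite3 database, mirroring the machine-code-plus-database description of a Mind Assembly agent. The `ToyMind` class and its methods are my own invented names, not part of any real kit.

```python
# Hypothetical sketch: a mind reduced to "program + data".
import sqlite3

class ToyMind:
    def __init__(self):
        # The data half: a pool of stored memories, kept in sqlite3.
        self.memory = sqlite3.connect(":memory:")
        self.memory.execute("CREATE TABLE memories (tick INTEGER, event TEXT)")

    def remember(self, tick, event):
        self.memory.execute("INSERT INTO memories VALUES (?, ?)", (tick, event))

    def recall(self):
        # The program half: logic that operates over the stored data.
        return [row[0] for row in
                self.memory.execute("SELECT event FROM memories ORDER BY tick")]

mind = ToyMind()
mind.remember(1, "spawned")
mind.remember(2, "observed vicinity")
print(mind.recall())  # ['spawned', 'observed vicinity']
```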

Your mind receives input from your senses, and a Mind Assembly agent receives input from its senses. Your senses are eyes, ears, skin, and such. The senses of a Mind Assembly agent are a Mind Assembly's read function, which is executed once per simulation quantum ("tick") by the agent's universe. MA::read is unique in that only this function may access Universe::query. You may wonder whether a Mind Assembly agent that executes Universe::query calls arbitrarily from within its read function can build up a complete picture of its universe and thereby become omniscient, and the answer is "yes". The included example Mind Assemblies have read functions that query data only from the immediate simulated vicinity of their agent instance.
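The tick loop described above can be sketched in Python. Everything here is hypothetical scaffolding of my own invention (the real MA::read / Universe::query exist only in concept); the point is the shape: the universe calls the agent's read function once per tick, and only inside read does the agent get a handle on query.

```python
# Hypothetical sketch of the tick loop: senses fire once per tick.
class Universe:
    def __init__(self, facts):
        self._facts = facts          # the universe's complete state
        self.tick_count = 0

    def query(self, key):
        return self._facts.get(key)

class MindAssembly:
    def __init__(self, position):
        self.position = position
        self.percepts = []

    def read(self, universe):
        # The example MAs query only their immediate vicinity;
        # an "omniscient" read could instead iterate every key.
        self.percepts.append(universe.query(self.position))

def tick(universe, agent):
    universe.tick_count += 1
    agent.read(universe)             # the agent's only window on the world

u = Universe({"here": "a tree", "far away": "a supernova"})
a = MindAssembly("here")
tick(u, a)
print(a.percepts)  # ['a tree']
```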

TODO: write

As you experiment with modifying the parameters of the included example Mind Assemblies (MA) and then running them in simulated universes, you may be tempted to modify the examples' read functions to query arbitrary data. Indeed, this is a worthy experiment, and if you do so, you will soon note that the R value of the agents governed by the MA you are modifying rapidly converges to either 0.0 or 1.0.

R value is intended to represent rationality, and is assigned by Universe::grade_rationality at the conclusion of each tick. Universal Simulator, as configured by default, derives R from a heuristic evaluation of MA agent behavior in terms of the agent's persistence, its fecundity, and the persistence of the agents it spawns. That is to say, an MA agent is considered perfectly rational (1.0) while it continues to exist in a universe where it has at least one child that continues to exist. A perfectly irrational MA agent (0.0) has terminated not only its own existence, but that of its offspring.
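The grading heuristic, as described, can be sketched like so. This is a guess at one plausible implementation, not the real Universe::grade_rationality: the two extremes follow the post's definitions, and the middle ground is an invented blend of persistence and fecundity.

```python
# Hypothetical sketch of the R heuristic described above.
def grade_rationality(agent_alive, children_alive):
    """children_alive: one boolean per child the agent has spawned."""
    if agent_alive and any(children_alive):
        return 1.0                   # perfectly rational: persists with a persisting child
    if not agent_alive and children_alive and not any(children_alive):
        return 0.0                   # perfectly irrational: ended itself and its offspring
    # Everything else: a crude (invented) blend of persistence and fecundity.
    surviving = sum(children_alive) / len(children_alive) if children_alive else 0.0
    return 0.5 * (1.0 if agent_alive else 0.0) + 0.5 * surviving

print(grade_rationality(True, [True, False]))   # 1.0
print(grade_rationality(False, [False]))        # 0.0
```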

So, if you provide a mind with total knowledge of its universe, it either likes it and decides to make itself immortal + have a big family, or... not. Why so cut and dried?

It is an ontological consequence of Gödel's incompleteness theorem, writ large. To see why, create a universe with a special rule that makes MA::state inaccessible to Universe::query:

# HAIK US set private_agent_state=yes "myuniverse"

Note that even an apparently omniscient read implementation no longer guarantees rapid convergence to R=1.0 or R=0.0. The reason is simple: if an agent cannot completely know itself, it cannot know whether it truly wants to exist.
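Extending the earlier toy sketch, the private_agent_state rule might look like this. The key-naming convention (`agent_state/...`) is my own invention; the point is that the query interface refuses anything that would expose an agent's internal state, so even a read-everything agent cannot close the loop on itself.

```python
# Hypothetical sketch of the private_agent_state rule.
class PrivateUniverse:
    def __init__(self, facts, private_agent_state=True):
        self._facts = facts
        self.private_agent_state = private_agent_state

    def query(self, key):
        # With the rule enabled, MA::state is simply unqueryable.
        if self.private_agent_state and key.startswith("agent_state/"):
            return None
        return self._facts.get(key)

u = PrivateUniverse({"sky": "blue", "agent_state/Jnn": "wants_to_exist=?"})
print(u.query("sky"))               # 'blue'
print(u.query("agent_state/Jnn"))   # None
```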

That's interesting as far as MA beings go, but what does it say of us? Of us human beings? Certainly, even if we could know everything about our universe, one thing would remain that we do not know: ourselves. Or, alternatively, it says that none of us is omniscient.

Possibly to be continued...
