LSaldyt
2017-12-04 12:03:35 -07:00
parent 322329b646
commit 2f3d934a20
2 changed files with 64 additions and 28 deletions


@@ -30,34 +30,34 @@
\section{Introduction}
This paper stems from Mitchell's (1993) and Hofstadter's \& FARG's (1995) work on the copycat program.
This project focuses on effectively simulating intelligent processes through increasingly distributed decision-making.
In the process of evaluating the distributed nature of copycat, this paper also proposes a ``normal science'' framework.

First, copycat uses a ``Parallel Terraced Scan'' as its human-inspired search algorithm.
The Parallel Terraced Scan corresponds to the psychologically plausible behavior of briefly browsing, say, a book, and delving deeper whenever something sparks one's interest.
In a way, it is a mix of depth-first and breadth-first search.
This type of behavior fluidly changes the intensity of an activity based on local, contextual cues.
Previous FARG models use centralized structures, like the global temperature value, to control the behavior of the Parallel Terraced Scan.
This paper explores how to maintain the same behavior while distributing decision-making throughout the system.
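
As a rough illustration, consider the following sketch of a single scan step, in which each path's chance of being deepened depends only on its own, locally computed interest.
This is not copycat's actual codelet scheduler; \texttt{extend-path} and \texttt{local-interest} are hypothetical stand-ins for domain code:
\begin{verbatim}
(defun terraced-scan-step (paths)
  ; PATHS is a list of (state . urgency) pairs, urgency in [0, 1].
  ; A path is deepened with probability equal to its own urgency, so
  ; promising paths get depth-first attention while the rest idle,
  ; with no global control value involved.
  (loop for (state . urgency) in paths
        if (< (random 1.0) urgency)
          collect (let ((next (extend-path state)))
                    (cons next (local-interest next)))
        else
          collect (cons state urgency)))
\end{verbatim}
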
Specifically, this paper attempts several refactorings of the copycat architecture.
First, the temperature-based probability adjustment formulas are changed.
Then, we experiment with two methods for replacing temperature with a distributed metric.
Initially, temperature is removed destructively: every line of code that mentions it is deleted, simply to see what effect this has.
Then, a surgical removal of temperature is attempted, leaving intact the affected structures or replacing them with effective distributed mechanisms.
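
As a concrete, hypothetical example of the surgical style, a codelet that formerly consulted the global \texttt{*temperature*} could instead average the unhappiness of the structures it can see locally; \texttt{structure-unhappiness} is an assumed accessor, not copycat's actual interface:
\begin{verbatim}
(defun local-temperature (visible-structures)
  ; A distributed stand-in for *temperature*: the mean unhappiness
  ; of only those structures visible to the calling codelet.
  (if (null visible-structures)
      0
      (/ (reduce #'+ (mapcar #'structure-unhappiness
                             visible-structures))
         (length visible-structures))))
\end{verbatim}
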
To evaluate the distributed nature of copycat, this paper focuses on the creation of a ``normal science'' framework.
By ``normal science,'' this paper means the term coined by Thomas Kuhn: the collaborative enterprise of furthering understanding within a paradigm.
Today, normal science is simply not done on FARG architectures, nor on most other computational cognitive architectures (see Addyman \& French 2012).
Unlike mathematical theories or experiments, which can be replicated by following the materials and methods, computational models generally have dozens of finely tuned variables, undocumented procedures, and multiple assumptions about the user's computational environment.
It then becomes close to impossible to reproduce a result or to test a new idea scientifically.
This paper focuses on the introduction of statistical techniques, the reduction of ``magic numbers,'' the improvement and documentation of formulas, and proposals for statistical comparison against human data.

We also discuss, in general, the nature of the brain as a distributed system.
While the removal of a single global variable may initially seem trivial, copycat and other cognitive architectures have many such central structures.
This paper explores the justification of these central structures in general.
Is it possible to model intelligence with them, or are they harmful?
%% {Von Neumann Discussion}
%% {Turing Completeness}
@@ -69,7 +69,15 @@
\section{Methods}
\subsection{Formula Documentation}
Many of copycat's formulas rely on magic numbers and are only marginally documented.
This is less of a problem in the original LISP code and more of a problem in the twice-translated Python3 version of copycat.
However, even in copycat's LISP implementation, formulas have redundant parameters.
For example, given the two formulas $f(x) = 2x$ and $g(x) = x^2$, the single composed and simplified formula $h(x) = f(g(x)) = 2x^2$ can be written instead.
Ideally, the adjustment formulas within copycat could be reduced in the same way, so that much of copycat's behavior rests on a handful of parameters in a single location, rather than on more than ten parameters scattered throughout the repository.
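For instance, if two hypothetical scale parameters only ever act through their product, as in
\[
f(x) = a x, \qquad g(x) = b x^2, \qquad (f \circ g)(x) = a b x^2 = c x^2,
\]
then only $c = ab$ is identifiable from the system's behavior, and documenting $a$ and $b$ separately merely obscures the model.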
Moreover, parameters in copycat often have no statistically significant effect.
As will be discussed in the $\chi^2$ distribution testing section, any copycat formula shown to have no significant effect will be removed outright.
\subsection{Testing the Effect of Temperature}
To begin, the existing effect of the centralizing temperature variable will be analyzed.
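One minimal way to do so, sketched below, is to tally the distribution of answers over many runs, both with temperature active and with it clamped; \texttt{run-copycat} and its \texttt{:clamp-temperature} switch are hypothetical names rather than copycat's actual interface:
\begin{verbatim}
(defun answer-distribution (problem n &key clamp-temperature)
  ; Run copycat N times on PROBLEM and count how often each
  ; answer string appears.
  (let ((counts (make-hash-table :test #'equal)))
    (dotimes (i n counts)
      (declare (ignorable i))
      (incf (gethash (run-copycat problem
                                  :clamp-temperature clamp-temperature)
                     counts 0)))))
\end{verbatim}
The two resulting count tables are what the $\chi^2$ test below compares.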
\subsection{Temperature Probability Adjustment}
\subsection{Temperature Usage Adjustment}
\subsection{$\chi^2$ Distribution Testing}
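A sketch of the intended test: given observed and expected answer counts over the same categories (all expected cells non-zero), Pearson's statistic is computed and compared against the $\chi^2$ distribution with $k - 1$ degrees of freedom, where $k$ is the number of categories.
\begin{verbatim}
(defun chi-squared-statistic (observed expected)
  ; Pearson's chi-squared: the sum of (O - E)^2 / E over all
  ; answer categories.
  (reduce #'+ (mapcar (lambda (o e) (/ (expt (- o e) 2) e))
                      observed expected)))
\end{verbatim}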

papers/formulas/adj.l Normal file

@@ -0,0 +1,28 @@
(defun get-temperature-adjusted-probability (prob &aux low-prob-factor result)
  ; This function is a filter: it inputs a probability (from 0 to 1) and
  ; returns an adjusted probability (from 0 to 1) based on that value and
  ; the temperature. When the temperature is 0, the result equals the
  ; input, but at higher temperatures, probabilities below .5 get raised
  ; and probabilities above .5 get lowered as a function of temperature.
  ; I think this whole formula could probably be simplified.
  (setq result
        (cond ((= prob 0) 0)
              ((< prob .5)
               (setq low-prob-factor (max 1 (truncate (abs (log prob 10)))))
               (min (+ prob
                       (* (/ (- 10 (sqrt (fake-reciprocal *temperature*)))
                             100)
                          (- (expt 10 (- (1- low-prob-factor))) prob)))
                    .5))
              ((= prob .5) .5)
              ((> prob .5)
               (max (- 1
                       (+ (- 1 prob)
                          (* (/ (- 10 (sqrt (fake-reciprocal *temperature*)))
                                100)
                             (- 1 (- 1 prob)))))
                    .5))))
  result)
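
; In closed form (a sketch, assuming fake-reciprocal returns (- 100 x),
; as elsewhere in copycat, so that f = (/ (- 10 (sqrt (- 100 T))) 100)):
;   for 0 < p < .5:  p' = min(p + f * (10^(1-k) - p), .5),
;                    where k = max(1, floor(|log10 p|))
;   for p > .5:      p' = max(p * (1 - f), .5)
; At T = 0, f = 0 and p' = p, matching the comment above.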