Compare commits


334 Commits

Author SHA1 Message Date
72d0bf3d3e Add comprehensive centrality analysis to slipnet study
Key finding: Eccentricity is the only metric significantly correlated
with conceptual depth (r=-0.380, p=0.029). Local centrality measures
(degree, betweenness, closeness) show no significant correlation.

New files:
- compute_centrality.py: Computes 8 graph metrics
- centrality_comparison.png: Visual comparison of all metrics
- Updated paper with full analysis

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 21:17:02 +00:00
50b6fbdc27 Add slipnet analysis: depth vs topology correlation study
Analysis shows no significant correlation between conceptual depth
and hop distance to letter nodes (r=0.281, p=0.113). Includes
Python scripts, visualizations, and LaTeX paper.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 20:58:15 +00:00
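A minimal sketch of the kind of test these two commits describe, assuming the slipnet has been exported as a connected networkx graph whose nodes carry a depth attribute (the export format, attribute name, and function name are assumptions, not the repository's actual code):

    import networkx as nx
    from scipy.stats import pearsonr

    def depth_vs_eccentricity(G):
        # Eccentricity of a node: its longest shortest-path distance
        # to any other node (assumes a connected graph).
        ecc = nx.eccentricity(G)
        depths = [G.nodes[n]['depth'] for n in G.nodes()]
        eccentricities = [ecc[n] for n in G.nodes()]
        return pearsonr(depths, eccentricities)  # (r, p), as reported above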
06a42cc746 Add CLAUDE.md and LaTeX paper, remove old papers directory
- Add CLAUDE.md with project guidance for Claude Code
- Add LaTeX/ with paper and figure generation scripts
- Remove papers/ directory (replaced by LaTeX/)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 19:14:01 +00:00
19e97d882f Merge master branch into main
Consolidating all project history from master into main branch.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-28 15:59:46 +00:00
4788ffbc05 Commit project files before pushing to Gitea 2025-10-06 17:19:27 +01:00
5593a109ab Update README.md 2025-04-23 21:39:29 +02:00
c80329fea0 Update README.md 2025-04-23 21:36:53 +02:00
6b6155b501 Update README.md 2025-04-23 21:34:16 +02:00
12c81be128 Update README.md 2025-04-23 21:27:58 +02:00
c766d446c5 Update README.md 2025-04-23 21:27:11 +02:00
50d6643bbb Update README.md 2025-04-23 21:25:30 +02:00
3b82892136 Update README.md 2025-04-23 21:23:11 +02:00
7d324e44e9 Update README.md 2025-04-23 21:20:58 +02:00
69a04a724b Update README.md 2025-04-23 21:20:05 +02:00
bdbb964d5d Update README.md 2025-04-23 21:19:07 +02:00
85650b7acb docs: update main README with system analysis 2025-04-23 20:56:19 +02:00
bae3a17d24 docs: add detailed codelet decorator explanation with comments 2025-04-23 19:42:05 +02:00
0c40e4b265 docs: add comprehensive README files for all components 2025-04-23 18:48:21 +02:00
599c4c497d Rename documentation files to source_README.md format 2025-04-23 17:31:00 +02:00
978383638b Documentation of source files (by LLM) 2025-04-23 17:28:30 +02:00
c05e3e832c Merge pull request #2 from mmcguill/python3-tk-nav-toolbar-fix
Fix to run with Python 3.8.1
2020-03-26 07:39:11 -04:00
ce1ac214dc Fix to run with Python 3.8.1 - NavigationToolbar2TkAgg is not available anymore 2020-01-23 10:40:52 +01:00
8e59c72c60 Merge pull request #1 from jalanb/patch-1
Add dot
2018-08-23 18:52:07 -03:00
5c24c082f5 Add dot
OMG! I've been quoted on the InterWebs.

Thank You for the awesome curation work here!

Looking forward to alanogising along your shelves.
2018-08-23 09:07:24 +01:00
550117cf15 Update README instructions 2018-06-20 07:58:07 -06:00
bcb7ba2707 Updates README with updated gui screenshot 2018-01-16 08:27:36 -07:00
e877060064 merge branch 'develop' 2018-01-12 15:36:12 -07:00
54732b1477 Merge branch 'revision-2.0' into develop 2018-01-12 15:33:51 -07:00
84cc3bf9c1 Adds probability difference to copycat/statistics 2018-01-12 15:32:53 -07:00
1506e673a2 Further cleans resulting tables 2018-01-12 15:09:13 -07:00
6215ccb08a Cleans table generation and transposes output 2018-01-12 14:30:35 -07:00
0a0369f5e1 Generalizes X^2 test to use G value 2018-01-12 14:19:11 -07:00
aed399ef34 Minor edits 2018-01-09 21:42:42 -07:00
5138126148 Reverts comparison script 2017-12-09 19:55:14 -07:00
58b0fd20eb Adds results section, additional sources: 2017-12-09 19:54:49 -07:00
6a826966a1 Spell checks draft.tex, adds sources 2017-12-09 18:44:25 -07:00
be921108ab Fixes sources + adds makefile 2017-12-09 18:08:44 -07:00
100eb11a99 Adds bibliography 2017-12-09 17:21:30 -07:00
28e1ddae30 Updates chi2 code for final comparison 2017-12-09 15:15:06 -07:00
a8b9675d2f Updates paper draft for final chi2 table 2017-12-09 15:14:53 -07:00
7eb7378ed3 Removes alpha/beta from weighted formula 2017-12-08 10:49:58 -07:00
08f332b936 Adds initial probability-adjustment-formula changes 2017-12-08 10:48:41 -07:00
81d2590fa6 Adds Theory section edits 2017-12-08 10:16:58 -07:00
9bf2980668 Merge branch 'develop' 2017-12-07 17:16:38 -07:00
0973e75813 Fixes issue with no gui 2017-12-07 17:15:20 -07:00
646cdbb484 Editing of draft.tex 2017-12-05 14:22:29 -07:00
cfee3303ba Remove some comments from draft 2017-12-04 19:36:30 -07:00
c513f66ed9 Adds revise notes to copycat draft introduction 2017-12-04 19:27:27 -07:00
4932bf6826 Adds no-fizzle distributions 2017-12-04 16:09:43 -07:00
3849632037 WIP 2017-12-04 15:40:05 -07:00
fe599c535e Adds no probability adjustment comparison 2017-12-04 13:55:53 -07:00
d61dbed3ba Adds no-adj distribution set 2017-12-04 13:48:13 -07:00
b4b1db5107 Draft progress 2017-12-04 13:39:16 -07:00
e05bf2ea19 Organizes paper drafts 2017-12-04 12:19:40 -07:00
f51379df1c Merge branch 'master' into revision-2.0 2017-12-04 12:17:19 -07:00
9c03520aeb Fixes cli/gui conflicts 2017-12-04 12:14:31 -07:00
25c0fbf3ab Merge branch 'develop' into revision-2.0 2017-12-04 12:11:32 -07:00
d1b8100c2c Merge branch 'feature-gui' into revision-2.0 2017-12-04 12:10:48 -07:00
47af738f7c Merge branch 'feature-temperature-effect-analysis' into revision-2.0 2017-12-04 12:09:51 -07:00
3d9079cfbd Merge branch 'legacy' into revision-2.0 2017-12-04 12:08:39 -07:00
93614db3dd Removes huge files 2017-12-04 12:06:34 -07:00
d605855bbc Merge branch 'paper' into revision-2.0 2017-12-04 12:06:12 -07:00
0242456138 Adds better cross-compare output 2017-12-04 12:05:07 -07:00
2f3d934a20 WIP 2017-12-04 12:03:35 -07:00
322329b646 Adds second draft structure 2017-12-04 11:46:00 -07:00
282a4307d4 Adds normal science section notes 2017-12-02 13:58:39 -07:00
bb3bdf251d Moves notes into paper structure 2017-12-02 13:36:48 -07:00
b6789b96f9 Enumerates a few of the pre-discussion points 2017-12-01 12:40:37 -07:00
4bd3983e71 Adds draft.tex, an organized paper draft to be.. 2017-12-01 12:35:37 -07:00
652511b420 Adds some final temperature-calculation notes 2017-12-01 12:23:38 -07:00
b58ec09722 Adds additional notes and plans to paper 2017-11-27 19:00:04 -07:00
95a10c902e Fixes typo 2017-11-20 18:26:30 -07:00
835c43199e Adds updated distribution and results 2017-11-19 14:30:58 -07:00
98c2b497a6 Fixes bugs with multi-formula distributions 2017-11-19 14:29:09 -07:00
6757d5b842 Adds cross-chi2 results 2017-11-19 11:39:22 -07:00
402e66409a Adds cross-chi2 comparison script 2017-11-19 11:25:11 -07:00
ee20de8297 Adds distributions from separate branches 2017-11-19 11:05:35 -07:00
bac0eb05c4 Updates distributions 2017-11-19 11:03:14 -07:00
06c5625f75 Updates distributions 2017-11-19 10:58:02 -07:00
be6d1fa495 Merge branch 'feature-normal-science-backport' into legacy 2017-11-18 18:44:24 -07:00
91dcbeefec Merge branch 'feature-normal-science-backport' into feature-temperature-effect-analysis 2017-11-18 18:44:10 -07:00
35ae49d5a4 Merge branch 'feature-normal-science-backport' into feature-normal-science-framework 2017-11-18 18:34:25 -07:00
ec9e0c333e Merge branch 'feature-normal-science-backport' into feature-gui 2017-11-18 18:32:13 -07:00
4388bede7d Removes old tests 2017-11-18 18:30:55 -07:00
9c47615c5a Fixes issues with variable initialization order 2017-11-18 18:30:23 -07:00
ee483851d8 Creates normal science backport 2017-11-18 18:25:24 -07:00
ff152c6398 WIP 2017-11-18 17:32:37 -07:00
89d26c1e8a Adds temperature history plotting 2017-11-17 11:17:15 -07:00
589b93bfc9 WIP 2017-11-17 09:58:28 -07:00
3f26d302c8 Adds gitignore 2017-11-17 09:40:28 -07:00
bd8bec2d37 WIP: Add cross-chi-2 tests 2017-11-16 08:45:11 -07:00
bdf47b634c WIP 2017-11-16 08:03:01 -07:00
d16e347f04 Adds problem class 2017-11-14 18:05:58 -07:00
57221b9c45 Adds random notes and plans 2017-11-14 17:28:43 -07:00
6951fd5de7 Adds meta and parameterized meta formulas, for fun 2017-11-14 17:16:02 -07:00
f6f5fffc78 Adds initial writeup for adjustment formulas 2017-11-14 17:08:23 -07:00
a3d693d457 Adds adjustment formulas and embeds them in paper 2017-11-13 10:49:15 -07:00
7a39d6b00d Fixes some silly wording in paper 2017-11-13 10:40:18 -07:00
ccf10b8a0c Merge branch 'feature-temperature-effect-analysis' into paper 2017-11-13 10:40:07 -07:00
20d754faa7 Changes to pbest only formula 2017-11-12 15:24:08 -07:00
906799e32d Adds old distribution data for testing against 2017-11-12 15:20:32 -07:00
45ca7ff912 Adds distribution saving and cl args 2017-11-12 14:52:46 -07:00
4ba37827e4 Adds free-form notes influenced by Von Neumann 2017-11-12 10:03:16 -07:00
1ac590ad06 Updates gitignore + WIP on paper 2017-11-07 21:18:24 -07:00
b391e2ca83 Separates paper 2017-11-06 20:48:48 -07:00
6fd1539924 Initial commit (move from openleaf) 2017-11-06 20:14:32 -07:00
782e38a50c Update README.md 2017-11-03 13:00:56 -07:00
d0c98247c6 Updates README with GUI screenshot 2017-11-03 12:59:48 -07:00
25841db648 Fixes graph updates 2017-11-03 12:56:44 -07:00
97c9b2eb57 Updates gitignore 2017-11-02 16:27:45 -07:00
6b02fe3ca0 WIP 2017-11-02 16:19:01 -07:00
ac99a2ad9a Merge branch 'feature-adjustment-formula' into feature-temperature-effect-analysis 2017-11-01 17:47:06 -07:00
d744f19986 Removes unneeded files 2017-11-01 11:15:31 -07:00
d6a073dfc8 Merge branch 'develop' 2017-11-01 11:14:24 -07:00
bb4f758a39 Merge branch 'feature-adjustment-formula' into develop 2017-11-01 11:09:42 -07:00
15c435c9f4 Merge branch 'feature-gui' into develop 2017-11-01 11:06:21 -07:00
9a5aca5b80 Adds parameterized and soft "best" formula alts 2017-11-01 11:01:40 -07:00
af72289ee5 Improves descriptor log 2017-11-01 10:41:39 -07:00
af1e2e042e Fixes bugs with pausing 2017-11-01 10:19:22 -07:00
20b25125d8 WIP 2017-10-26 19:02:39 -07:00
fe5110f00b Merge branch 'feature-gui' into develop 2017-10-26 12:55:17 -07:00
80b6e49581 WIP 2017-10-26 12:52:58 -07:00
2187511706 Adds plot saving 2017-10-26 11:00:08 -07:00
50bbb468b7 Separates style 2017-10-25 20:06:10 -07:00
8227072ebd Converts to ttk 2017-10-25 19:31:13 -07:00
8143259170 Adds interval-updated list widgets 2017-10-25 19:19:25 -07:00
cc35cb9de2 Adds play on go 2017-10-25 10:54:54 -07:00
2cf1320672 Creates a GUI specific runnable 2017-10-25 10:21:07 -07:00
3a6b2ac18f Adds logging and basic object view 2017-10-24 13:29:57 -07:00
433067a045 Fixes some displays 2017-10-22 13:15:26 -07:00
dcf5b252c3 Adds entry method 2017-10-22 13:05:28 -07:00
354470bcd7 Adds live temperature and answer graphs 2017-10-21 15:24:53 -07:00
0a8a9a8f23 Adds GridFrame and Step button 2017-10-21 14:36:04 -07:00
aa218988fd Adds play/pause button 2017-10-21 14:12:42 -07:00
0d972a6776 Removes log 2017-10-21 14:12:33 -07:00
95eb1a0b97 Updates to dark theme 2017-10-21 13:38:38 -07:00
300a0a157a WIP 2017-10-21 12:57:30 -07:00
f51525450d Creates primary gui frame class 2017-10-21 12:53:23 -07:00
9d06021f5d WIP unify GUI 2017-10-21 12:32:35 -07:00
71c7b34f63 GUI Progress: Inline animated graphs 2017-10-21 10:41:46 -07:00
aafb0de433 Makes GUI stretchy 2017-10-20 11:42:22 -07:00
397d49cc58 Random scattered analysis 2017-10-19 20:37:40 -07:00
ebee40323c WIP 2017-10-18 10:29:03 -07:00
176e6cd4e2 Adds final formula comparison code 2017-10-16 18:18:55 -07:00
a9a7a2c504 Merge branch 'develop' 2017-10-16 13:43:30 -07:00
268e00998a Merge branch 'feature-adjustment-formula' into develop 2017-10-16 13:43:01 -07:00
1835367ea9 Updates tests: All tests pass. 2017-10-16 13:41:47 -07:00
765323c3cd Updates adjustment formula to a satisfactory formula
Of all the formulas I've tried, I believe this one to be a more elegant alternative to Mitchell's.
Like Mitchell's original formula, it still produces weird answers in some cases, but I believe these stem from elsewhere,
not from the adjustment formula.

For more information, please see:

https://docs.google.com/spreadsheets/d/1JT2yCBUAsFzMcbKsQUcH1DhcBbuWDKTgPvUwD9EqyTY/edit?usp=sharing
2017-10-16 13:21:19 -07:00
6985dedb18 Adds additional adjustment formulas 2017-10-15 14:38:48 -06:00
dd0e7d8f37 Adds codelet list display 2017-10-13 15:24:47 -06:00
fffc4863f2 Adds slipnode list 2017-10-13 15:15:45 -06:00
5d82c5186e Adds main canvas and improved temp reading 2017-10-13 14:53:36 -06:00
23def40750 Fixes main display size 2017-10-12 18:00:07 -06:00
0ba421029c Adds additional formulas, weights, and tests 2017-10-11 22:12:53 -06:00
67fdcc70e7 WIP 2017-10-11 19:58:03 -06:00
ef0e177bad WIP 2017-10-11 19:50:33 -06:00
81f2ef2473 Merge branch 'feature-adjustment-formula' into develop 2017-10-09 17:55:02 -06:00
f3386b91be Adds automated chi^2 testing 2017-10-09 17:54:26 -06:00
73558f378c Reverts to original adjustment formula for tests 2017-10-09 14:18:27 -06:00
b7c073d16b Modifies tests for inverse adjustment formula 2017-10-09 14:06:41 -06:00
6ff43f8d5a Updates tests 2017-10-09 14:01:17 -06:00
ca38b16e72 Updates travis testing installation step 2017-10-09 13:08:35 -06:00
81984d24e2 Restricts testing to develop and master 2017-10-09 13:03:12 -06:00
08ea0b2e10 Merge branch 'feature-temperature-improvements' into develop 2017-10-09 13:01:42 -06:00
f3e1c875de Adds travis testing skeleton 2017-10-09 13:01:18 -06:00
be06e22f64 Moves tests 2017-10-09 13:01:11 -06:00
73d0721286 Moves log location 2017-10-09 12:53:54 -06:00
27a55668be Experiments with alt inverse:
Equal probabilities for all items when temperature is equal to 100
2017-10-09 12:20:45 -06:00
3bf417e38a WIP 2017-10-09 11:06:16 -06:00
874683bf18 Adds clarification to breaker codelet docs 2017-10-07 23:38:48 -06:00
96c7c6e08c Cleans code, moving formula choice to copycat.py 2017-10-05 15:17:39 -06:00
c2c5d24f0d TAG: Formula testing code 2017-10-05 15:03:58 -06:00
8203cebb15 Calculate Chi^2 values for getAdj- formulas 2017-10-04 15:37:22 -06:00
b90bae2584 Adds automatic running, formula tests 2017-10-04 15:20:59 -06:00
7abb40f849 Adds problems and result saving 2017-10-04 15:20:48 -06:00
430e2b3750 Adds base temp formula 2017-09-29 17:55:24 -06:00
6b1c4634fe Fixes plot labels (slightly) 2017-09-29 16:59:26 -06:00
fa8b493948 Adds annotations and formula notes 2017-09-29 15:01:57 -06:00
665bf1f778 Adds notes to temperature.py 2017-09-29 13:47:03 -06:00
60a5274066 Adds additional plotting options 2017-09-29 13:34:26 -06:00
6142033631 Stop tracking copycat log
The log should still appear locally.
2017-09-29 13:18:03 -06:00
1c570735f8 Add simple matplotlib bar plots per run
As well as a flag to turn plotting on
2017-09-29 13:16:25 -06:00
42a875a492 Minor annotations to temperature calculations 2017-09-29 13:12:16 -06:00
c90dbd91e7 WIP 2017-09-28 15:44:41 -06:00
6d42f2c1a4 Changes default window size to 1200x800 2017-09-28 15:37:09 -06:00
33e2eb980d Fixes slipnode display 2017-09-28 15:35:15 -06:00
cd3ad65ff8 Documents usages of temperature 2017-09-28 15:04:42 -06:00
70494daf2c WIP gui changes 2017-09-28 10:53:37 -06:00
1b84b22e3f seems like the bug is in the sameness group or something very close to it 2017-09-28 01:18:13 -03:00
1cc18e75bd something is rotten somewhere 2017-09-28 00:54:07 -03:00
29b5987c4f ...and bingo! 2017-09-28 00:46:19 -03:00
9781e3ceed addtl testing... 2017-09-28 00:42:30 -03:00
4b1518a1af xyz? 2017-09-27 23:11:56 -03:00
3c8b21140d Experiments to refer to Lucas 2017-09-27 22:37:38 -03:00
75df81c8fd Experiments for Lucas email.
Merge branch 'master' of https://github.com/Alex-Linhares/co.py.cat
2017-09-27 22:33:54 -03:00
5605417e31 Preparing for refactor 2017-09-27 22:33:20 -03:00
b547306376 Preparing for refactor 2017-09-27 20:48:27 -03:00
a0048d16b5 Delete copycat.log 2017-09-27 20:34:31 -03:00
02558289ad Preparing for refactor 2017-09-27 20:32:54 -03:00
a564e43dff Preparing for refactor 2017-09-27 20:00:02 -03:00
120aa3a293 Merge branch 'master' of https://github.com/Alex-Linhares/co.py.cat 2017-09-27 16:02:39 -03:00
51e4ba64e2 Created simple jupyter notebook 2017-09-27 16:02:34 -03:00
ae24034288 WIP add gui elements 2017-09-27 12:30:42 -06:00
9a2a30ea4c Adds very simple gui to copycat 2017-09-27 11:38:32 -06:00
4348554fa7 Add simple matplotlib bar plots per run 2017-09-26 21:16:20 -06:00
27bbc6118e Preparing for refactor... 2017-09-26 22:47:09 -03:00
7ff0488861 Merge branch 'master' of https://github.com/Alex-Linhares/co.py.cat 2017-09-25 22:33:10 -03:00
0905d35680 start work on distributed decision making 2017-09-25 22:32:57 -03:00
36a1a31fe2 Update README.md 2017-09-07 19:26:10 -03:00
0a54c0ee83 Update README.md 2017-09-06 16:06:58 -03:00
729f6ec30c Merge branch 'master' of https://github.com/Alex-Linhares/co.py.cat 2017-08-28 00:02:46 -03:00
b5e35a35dd Found entry points for the research project 2017-08-28 00:02:34 -03:00
8611e415de Update README.md 2017-08-27 23:33:52 -03:00
67c87c7fde Update README.md 2017-08-27 23:28:41 -03:00
cc58c8d50a Merge pull request #1 from LSaldyt/master
Ports to Python3
2017-08-27 12:37:36 -03:00
2cdb9bbb36 Update README.md 2017-08-27 12:30:12 -03:00
197bbd361e Update README.md 2017-08-27 12:29:47 -03:00
bc848e8f2d Ports to Python3 2017-07-31 17:08:26 -06:00
318d0e2349 Fix a lot of crashes with empty or single-letter inputs. 2017-05-03 02:01:57 -07:00
2a48245c15 Add "frames per second" to the CursesReporter.
You can now set the FPS goal with `--fps=10` (or whatever) on the command line;
and the current (measured) FPS is displayed in the lower right corner.

During the run, you can bump the FPS goal up and down with `F` and `f` respectively!
2017-05-02 18:37:40 -07:00
0eec6a5259 Massively improve CursesReporter.
The Slipnet itself turns out to be boring to look at.
More interest is found in the Workspace structures, such as bonds,
groups, and correspondences.

The old behavior of `curses_main.py` is still accessible via

    python curses_main.py abc abd xyz --focus-on-slipnet
2017-05-02 18:01:46 -07:00
ef4a9c56c5 Try to fix up breakGroup.
With the new CursesReporter, I'm able to observe groups getting built
and broken; and I observed that sometimes a Bond (between a Letter and
a Group) would apparently survive the Group's breaking.
Reorder the operations in `breakGroup` so that the higher-level ones
("detach this Group from its external bonds") come strictly before
the lower-level ones ("ungroup this Group's members and remove this
Group from the Workspace, thus destroying it").

However, the "buggy" behavior I observed turned out to be due to a bug
in my display code and not due to anything wrong with `breakGroup`.
I suspect this patch is actually purely cosmetic.
2017-05-02 17:46:25 -07:00
730239f464 Rip out dead Bond.destinationIsOnRight and Bond.bidirectional. NFC. 2017-05-02 12:37:15 -07:00
5793fb887c Rip out dead method morePossibleDescriptions. NFC.
This code is already present in `getPossibleDescriptions`... which is
also a terrible function from the philosophical point of view, because
it secretly encodes knowledge about every predicate known to the system.
2017-05-02 11:33:43 -07:00
864c28609c Smartness update! A single letter is both "leftmost" and "rightmost".
Before this change, Copycat was unable to formulate more than the empty rule for
    abc : abd :: f : f
    abc : dbc :: f : f
    abc : aac :: f : f
After this change, Copycat strongly prefers
    abc : abd :: f : g  ("Replace the rightmost letter with its successor")
    abc : dbc :: f : d  ("Replace the leftmost letter with d")
    abc : aac :: f : e  ("Replace the middle letter with its predecessor")
2017-05-02 11:17:23 -07:00
ecc2c2e407 Add report_workspace() to Reporter, and remove dead rules from the workspace.
I think the change to `workspace.breakRule()` is harmless. In theory, it
should make Copycat less hesitant to come up with rules that conflict with
the already-broken rule.
2017-05-01 15:28:38 -07:00
25d73785de Further Pythonicity. NFC. 2017-05-01 13:07:19 -07:00
ceaf640147 Remove some more logging cruft. NFC. 2017-04-30 15:26:19 -07:00
c4e30f7399 Make possibleGroupBonds into a member function of Bond. NFC. 2017-04-30 15:18:19 -07:00
7947e955d7 More Pythonicisms. NFC. 2017-04-30 14:45:20 -07:00
ddfb34973d Rip out unused coderack.postings and coderack.runCodelets. NFC. 2017-04-30 10:38:42 -07:00
f9fc255151 Refactor coderack.probabilityOfPosting. NFC. 2017-04-30 10:27:55 -07:00
48c45e4b0a Fix more flake8 cruft; remove a bunch of logging. 2017-04-29 15:55:54 -07:00
c9bc26e03d Minor Pythonicisms. NFC. 2017-04-29 14:29:43 -07:00
11e9571ee0 Oops, add Reporter to the list of exported names. 2017-04-23 01:36:40 -07:00
34157be1f9 Shorten the setup.py for the copycat module. NFC. 2017-04-22 22:46:22 -07:00
9a2a1d6010 Add the Slipnet to the curses reporter.
This isn't terribly useful to the human observer, actually.
It seems like the most useful factors that ought to be displayed
are really the groups/bonds in the workspace and the current
"rule" (if any). Particularly, with the current design of Copycat,
it seems like the "rule" should be part of the displayed output
just the same as the modified target string.
2017-04-22 19:00:57 -07:00
16aae98c59 Fix a bunch of flake8 spam. NFC. 2017-04-22 18:41:48 -07:00
ec2c172ce0 Rip out some unused members of Slipnode. NFC. 2017-04-22 18:37:55 -07:00
b5b04c77a1 Remove a redundant "opposite" link from the slipnet.
This does change the micro behavior of Copycat. I would hope it doesn't
change the macro behavior, or at least changes it for the better.
2017-04-22 17:59:32 -07:00
e3e6b051d3 Whitespace. NFC. 2017-04-22 17:56:46 -07:00
3de933dbfa Redo all the argument parsing with argparse. 2017-04-22 17:53:06 -07:00
192ec2f106 Clean up some overly Java-ish base class stuff. NFC. 2017-04-18 23:44:38 -07:00
f2ffac4e66 Add a curses front-end. This is looking good now!
And clean up some logic in `rule.py`. This is the place where the
"brains" of Copycat really live, it seems; Copycat can only succeed
at solving a puzzle if it can take the `Rule` it deduced and apply
it to the target string to produce a new string. And it can only
do that if the necessary *actions* have been programmed into `rule.py`.
Right now, it explicitly can't deal with "rules" that involve more
than one local change; that involve reversal; or more importantly,
IIUC, rules that involve "ascending runs", because the idea of a
successor-group is(?) known to the Slipnet but not to `rule.py`;
the latter deals only in "strings", not in "workspace objects".
This seems like a major flaw in the system... but maybe I'm missing
something.
2017-04-18 23:18:26 -07:00
9f8bc8e66e Remove all print statements from the Copycat library.
Convert the most important one to logging; kill the rest.
2017-04-18 20:55:56 -07:00
65124fa45d Add a "setup.py" for pip-installing from GitHub.
You can now install Copycat into your Python virtualenv without even
checking out this repository! Just run this command:

    pip install -e git+git://github.com/Quuxplusone/co.py.cat.git#egg=copycat

To check out a specific branch,

    pip install -e git+git://github.com/Quuxplusone/co.py.cat.git@branch#egg=copycat
2017-04-18 18:22:32 -07:00
a3b977846e git mv context.py -> copycat.py; and start work on a "reporter" API.
The goal here is to use `curses` to display the coderack, slipnet,
and temperature in real time. A further goal would be a reporter
that sent the data over websockets to a browser, at which point
I could throw this thing up on Heroku and let people mess with it.
(Not that that would be very entertaining, yet.)
2017-04-18 01:59:51 -07:00
189bce2aa2 Remove one not-very-useful logging line. NFC. 2017-04-18 01:31:39 -07:00
db7dc21f83 Fix a crash on main.py aa b zz.
The "leftmost object" in the string `b` does span the whole string,
but it's not a `Group`, so the old code would crash when it got
to the evaluation of `group.objectList` (since `Letter`s don't have
`objectList`s).
2017-04-18 01:12:27 -07:00
fd74290d39 Clean up the handling of codelet arguments. NFC.
Just make all argument-passing explicit; which means the coderack
no longer cares about `oldCodelet` (which was being used only to
get the implicit arguments to the new codelet).
2017-04-18 01:12:27 -07:00
f08c57fac3 Fix some flake8 spam. NFC. 2017-04-18 01:12:27 -07:00
7388eaec54 Teach Context to be self-sufficient. NFC.
You can now create and run a Copycat instance by just saying
`Context().run('abc', 'abd', 'efg', 100)`!
2017-04-18 01:12:27 -07:00
12283b0836 Move some harmless imports to file scope. NFC. 2017-04-18 01:12:27 -07:00
30f8c623e5 Demagic workspaceFormulas.py. NFC. 2017-04-18 01:12:26 -07:00
3732ae8475 Major overhaul of "randomness" throughout.
- Nobody does `import random` anymore.

- Random numbers are gotten from `ctx.random`, which is an object
of type `Randomness` with all the convenience methods that used to
be obnoxious functions in the `formulas` module.

- Every place that was using `random.random()` to implement the
equivalent of Python3 `random.choices(seq, weights)` has been updated
to use `ctx.random.weighted_choice(seq, weights)`.

This has a functional effect, since the details of random number
generation have changed. The *statistical* effect should be small.
I do observe that Copycat is having trouble inventing the "mrrjjjj"
solution right now (even in 1000 test runs), so maybe something is
slightly broken.
2017-04-18 01:12:26 -07:00
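A rough sketch of the interface this commit describes; the internals here are illustrative, not the repository's actual implementation:

    import random

    class Randomness:
        def __init__(self, seed=None):
            self.rng = random.Random(seed)

        def coinFlip(self, p=0.5):
            # Convenience method mentioned in a later commit.
            return self.rng.random() < p

        def weighted_choice(self, seq, weights):
            # Python 3 equivalent: random.choices(seq, weights)[0]
            return self.rng.choices(list(seq), weights=list(weights))[0]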
8fdb9d06e6 Demagic everything in formulas.py. NFC.
Only one file left to go!
2017-04-18 01:12:26 -07:00
6165f77d3c Move a couple single-use helpers from formulas to codeletMethods. NFC. 2017-04-18 01:12:26 -07:00
ff389bd653 Move temperatureAdjustedFoo into the Temperature class. NFC.
And demagic all the callers of this function. Notice that with this
move, it becomes *harder* for these "getAdjustedFoo" functions to
access other contextual state, such as the state of the coderack
and the state of the workspace. This is a good thing for modularity
but possibly a misfeature in terms of flexibility-re-logic-changes.
2017-04-18 01:12:26 -07:00
99dc05f829 Demagic everything except the formulas and workspaceFormulas. NFC. 2017-04-18 01:12:26 -07:00
7581a328f7 Give WorkspaceString a self.ctx. Demagic all WorkspaceObjects. NFC. 2017-04-18 01:12:26 -07:00
bd4790a3f1 Kill all the globals (except context)! NFC. 2017-04-18 01:12:25 -07:00
22b15c3866 Demagic all the WorkspaceStructure children who aren't WorkspaceObjects. NFC. 2017-04-18 01:12:25 -07:00
b16666e4d7 Demagic WorkspaceStructure. NFC. 2017-04-18 01:12:25 -07:00
482c374886 Give every WorkspaceStructure a self.ctx member variable.
...which is currently initialized "by magic"; but removing that magic
will be the next step.
2017-04-18 01:12:25 -07:00
25ba9bfe93 (Almost) contextualize all the things! NFC.
The only top-level imports now are needed for inheritance relationships.

The only function-level imports are HACKS that I need to FIXME; they
all `from context import context as ctx` and then fetch whatever they
actually need from the `ctx` just as if `ctx` had been passed in by the
caller instead of fetched from this magical global storage.
2017-04-18 01:12:25 -07:00
3096c49fb9 This is working! 2017-04-18 01:12:24 -07:00
e6cbb347de testing 2017-04-18 01:12:24 -07:00
965bd13298 Disentangle another reference to slipnet. 2017-04-18 01:12:24 -07:00
6871d7a86c Disentangle one reference to slipnet. 2017-04-18 01:12:24 -07:00
cc288161a4 Major overhaul of temperature logic. Behavioral change.
I think the reason the temperature logic was so confused in the old code
is because the Java code has a class `Temperature` that is used for
graphical display *and* two variables in `formulas` that are used for
most of the actual math. But somewhere along the line, some of the code
in `formulas.java` started reading from `Temperature.value` as well.
So the Python code was just faithfully copying that confusion.

The actual abstraction here is a very simple "temperature" object
with a stored value. It can be "clamped" to 100.0 for a given period.
The only complication is that one of the codelets (the rule-transformer
codelet) wants to get access to the "actual value" of the temperature
even when it is clamped.

The Python rule-transformer codelet also had a bug: it was accidentally
setting `temperature.value` on the `temperature` module instead of on
the `temperature.temperature` object! This turned some of its behavior
into a no-op, for whatever that's worth.

Lastly, the calculation of `finalTemperature` in the main program can
now report 100.0 if the answer is found while the temperature is clamped.
I don't fully understand why this didn't happen in the old code.
I've hacked around it with `temperature.last_unclamped_value` for now,
but I should TODO FIXME.
2017-04-18 01:12:24 -07:00
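The abstraction described above, sketched minimally (names follow the commit message; the details are guesses):

    class Temperature:
        def __init__(self):
            self.value = 100.0
            self.clamped = True  # clamped to 100.0 for a given period
            self.last_unclamped_value = 100.0

        def update(self, value):
            self.last_unclamped_value = value
            if not self.clamped:
                self.value = value

        def actual_value(self):
            # The rule-transformer codelet reads this even while clamped.
            return self.last_unclamped_value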
6a56fdd898 Bikeshed some time-related names. 2017-04-18 01:12:23 -07:00
e5d44ae75c Bah! Remove CoderackPressures as it's not hooked up to anything. 2017-04-18 01:12:23 -07:00
44e5a8c59f Decouple Coderack from Slipnet. 2017-04-18 01:12:23 -07:00
e17dc2aa45 Untangle some initialization code. Assert invariants. NFC. 2017-04-18 01:12:23 -07:00
fa2142efaa Replace the coderack->workspaceFormulas coupling with coderack->workspace.
This feels slightly less tangled. Still needs work.
2017-04-18 01:12:23 -07:00
63b3fd4999 Decouple Slipnode from the global slipnet. 2017-04-18 01:12:23 -07:00
10f65fcf55 Inline the constant slipnet.timeStepLength. NFC. 2017-04-18 01:12:23 -07:00
d2436601ba Decouple coderack: remove global variable coderack.
Or at least, hide it upstairs in `copycat.py`.
`copycat.py` will eventually become a class, I'm guessing,
but let's pull everything into it first.
2017-04-18 01:12:22 -07:00
f2e28c0e19 Clean some dead code in __calculateIntraStringHappiness.
Indeed, it's dead in the Java version too.
2017-04-18 01:12:22 -07:00
ae0434d910 codeletMethods.py: Replace some random.random()s with coinFlip().
There is a bugfix in here as well: one of the probabilities was
being taken the wrong way around. The result should have been to
make single-letter groups very often, I guess? Fixing this bug
doesn't show any change in Copycat's macro behavior, except that
it seems like average temperatures have gotten hotter.
2017-04-18 01:12:22 -07:00
a3b122b75c Massive overhaul of "codelet methods" and the coderack.
Should be no functional change, but this gets rid of one circular
import (some codelet methods need a pointer to the coderack, but
they should be getting that pointer from their caller, not from
the global scope) and a lot of reflection-magic.
2017-04-18 01:12:22 -07:00
f37b88d032 random.seed(42) for testing; TODO revert me 2017-04-18 01:12:22 -07:00
3d630ba389 Decouple temperature from coderack. 2017-04-18 01:12:21 -07:00
51178c049d Inline trivial function Bond.get_source(). NFC. 2017-04-18 01:12:21 -07:00
a41b639487 Remove global variable coderackPressures (bugfix?)
Before this patch, `coderackPressures.updatePressures()` was always
a no-op, as evidenced by the until-now-harmless misspelling of Python's
list `remove` operation as `removeElement`.

I can't tell if this broke anything; my tests still pass.
2017-04-18 01:12:21 -07:00
5423d078e8 Move updateTemperature() from workspaceFormulas to workspace.
And remove dead and/or logging code to simplify the logic.
2017-04-18 01:12:21 -07:00
8171b68cbe Remove some unused global variables. NFC. 2017-04-18 01:12:20 -07:00
6fcf2f3350 git rm grouprun.py 2017-04-18 01:12:20 -07:00
d60ba5277c Remove a crash-causing line in slipnet.py.
Without this patch, `python main.py abc aabbcc milk` will reliably crash.
I believe what happens here is that we initialize all the slipnodes and
everything, and then `slipnet.predecessor` becomes `None`, which means
that if that concept ever arises on its own (vs. arising as the "opposite"
of "successor"), we'll be passing around `None` instead of a proper `Slipnode`
and everything goes sideways.

This line doesn't correspond obviously to anything in the Java code,
so I think it's just bogus --- an experiment in "brain damage" that was
accidentally committed?
2017-04-18 01:07:51 -07:00
0ff9d49111 Further Pythonicity and flake8 cleanup. NFC. 2017-04-16 18:22:57 -07:00
31323cd2bc Add some end-to-end tests!
These tests are intended to protect against regressions in the high-level
behavior of Copycat. They're not super precise, and they're VERY slow.
2017-04-16 18:22:57 -07:00
8e10814802 Further Pythonicity; and remove a bunch of logging from the inner loop. 2017-04-16 01:19:36 -07:00
77bfaaf5a7 Further refactor the main harness. Print average time for each solution. 2017-04-16 00:55:18 -07:00
3103f54ada Untie some loopy logic in addCodelet. (Functional change!) 2017-04-15 23:08:12 -07:00
e094160dcd More Pythonic cleanups. NFC. 2017-04-15 23:07:28 -07:00
a2260cdaf6 Run multiple iterations. Print final temperatures. Reduce stdout spew.
This makes the output of the program more closely resemble that of the
original Copycat described in "FCCA" page 236:

> [T]he average final temperature of an answer can be thought of as
> the program's own assessment of that answer's quality, with lower
> temperatures meaning higher quality.

For example, running `python main.py abc abd ijk 100` produced the
following output:

    ijl: 98 (avg temp 16.0)
    jjk: 1 (avg temp 56.3)
    ijk: 1 (avg temp 57.9)

And for `python main.py abc abd ijkk 100`:

    ijkkk: 2 (avg temp 19.8)
    ijkl: 51 (avg temp 28.1)
    ijll: 46 (avg temp 28.9)
    djkk: 1 (avg temp 77.4)
2017-04-15 22:29:46 -07:00
ed1d95896e More Pythonic idioms in coderackPressure.py.
No functional change.
2017-04-15 22:29:46 -07:00
88ee2ddd8d Spelling: neighbour -> neighbor.
The old code mixed both spellings; we might as well be consistent.
2017-04-15 22:29:46 -07:00
5735888d02 Minor Pythonicity cleanups.
No functional change.
2017-04-14 11:37:43 -07:00
69f75c3f42 Spelling: slipability -> slippability
No functional change.
2017-04-14 11:19:25 -07:00
bcfd8a5a10 Ignore mccabe complexity smells 2015-10-28 01:34:16 +00:00
c46e3b6db0 Allow more complex functions in Landscape 2015-10-28 01:29:43 +00:00
52402b99b3 Add Landscape configuration 2015-10-28 01:25:54 +00:00
aeb8cda755 Tidy references (which were broken by daeff3d) #5 2015-06-01 10:52:20 +01:00
daeff3d9bf Pylint the code 2014-12-22 23:44:09 +00:00
a5930b486c PEP 8 - line length 2014-12-22 20:18:54 +00:00
39fb7fc9b7 outdent 2014-12-22 16:56:53 +00:00
d4bb38b858 Github calls it sh, not shell 2014-12-22 16:56:16 +00:00
98357913e9 Make a separate para of final instruction 2014-12-22 16:53:02 +00:00
0f51434191 Better linkage #4 2014-12-22 16:50:15 +00:00
c38102c02e Extend readme to explain install & run, #4 2014-12-22 16:44:51 +00:00
94a0ecae48 PEP 008, mostly lines too long 2014-12-22 16:38:10 +00:00
c0971ce029 Link to a license file actually breaks license
Namely "copyright notice and this permission notice shall be included in all copies"
2014-03-21 12:25:58 +00:00
e58e449be2 Consistent licencing across projects 2013-08-16 10:24:41 +01:00
331114ebc3 Merge pull request #3 from jtauber/master
improved PEP compliance and fixed errors preventing it from running
2012-12-10 07:59:00 -08:00
8332b1387e fixed indentation problem 2012-12-01 02:15:25 -05:00
ab27b745be fixed missing random imports 2012-12-01 02:13:31 -05:00
b939f3ec3f fixed conceptual_depth for conceptualDepth 2012-12-01 02:12:22 -05:00
2281870cf2 removed unnecessary utils 2012-12-01 02:10:33 -05:00
33cf41b585 fix linter errors and warnings 2012-12-01 02:00:03 -05:00
cfaebd150f tabs to spaces 2012-11-30 02:12:44 -05:00
1ca7f1839f proper nouns don't take articles 2012-11-30 02:03:59 -05:00
53149013cc avoid duplicitous wording 2012-11-20 21:54:15 +00:00
b7b2a738b0 Ignore filesystem and editor files 2012-11-20 21:53:54 +00:00
86f0bf8016 Spell my own name correctly! 2012-10-26 18:25:24 +01:00
b3d46f3a68 Don't ignore unused files 2012-10-26 18:22:34 +01:00
feae97a988 Simpler returns 2012-10-26 18:20:26 +01:00
331b1ad3eb Separate out the main method 2012-10-26 18:20:15 +01:00
073f4fe05c I like to think Mr Hofstadter would appreciate the self-reference 2012-10-26 17:54:13 +01:00
47c0b457b3 Add license 2012-10-26 17:50:00 +01:00
b12ae322eb Make a package from the python scripts 2012-10-26 17:40:20 +01:00
d58dca3309 That's "Hofstadter" to me 2012-10-26 17:38:37 +01:00
5462c033ab Initial addition of Python scripts 2012-10-26 17:35:08 +01:00
90eb4a7b2a Initial commit 2012-10-26 09:16:21 -07:00
159 changed files with 17645 additions and 167 deletions

.claude/settings.local.json
@@ -0,0 +1,23 @@
{
  "permissions": {
    "allow": [
      "Bash(git push:*)",
      "Bash(/c/Users/alexa/anaconda3/python.exe export_slipnet.py:*)",
      "Bash(C:\\\\Users\\\\alexa\\\\anaconda3\\\\python.exe:*)",
      "Bash(/c/Users/alexa/anaconda3/python.exe compute_letter_paths.py)",
      "Bash(/c/Users/alexa/anaconda3/python.exe:*)",
      "WebFetch(domain:github.com)",
      "WebFetch(domain:raw.githubusercontent.com)",
      "Bash(dir \"C:\\\\Users\\\\alexa\\\\copycat\\\\slipnet_analysis\" /b)",
      "Bash(C:Usersalexaanaconda3python.exe plot_depth_distance_correlation.py)",
      "Bash(powershell.exe -Command \"cd ''C:\\\\Users\\\\alexa\\\\copycat\\\\slipnet_analysis''; & ''C:\\\\Users\\\\alexa\\\\anaconda3\\\\python.exe'' compute_stats.py\")",
      "Bash(powershell.exe -Command \"cd ''C:\\\\Users\\\\alexa\\\\copycat\\\\slipnet_analysis''; & ''C:\\\\Users\\\\alexa\\\\anaconda3\\\\python.exe'' plot_depth_distance_correlation.py\")",
      "Bash(powershell.exe -Command \"cd ''C:\\\\Users\\\\alexa\\\\copycat\\\\slipnet_analysis''; pdflatex -interaction=nonstopmode slipnet_depth_analysis.tex 2>&1 | Select-Object -Last 30\")",
      "Bash(powershell.exe -Command \"cd ''C:\\\\Users\\\\alexa\\\\copycat\\\\slipnet_analysis''; pdflatex -interaction=nonstopmode slipnet_depth_analysis.tex 2>&1 | Select-Object -Last 10\")",
      "Bash(powershell.exe -Command \"cd ''C:\\\\Users\\\\alexa\\\\copycat\\\\slipnet_analysis''; pdflatex -interaction=nonstopmode slipnet_depth_analysis.tex 2>&1 | Select-Object -Last 5\")",
      "Bash(powershell.exe -Command \"Get-ChildItem ''C:\\\\Users\\\\alexa\\\\copycat\\\\slipnet_analysis'' | Select-Object Name, Length, LastWriteTime | Format-Table -AutoSize\")",
      "Bash(powershell.exe:*)",
      "Bash(git add:*)"
    ]
  }
}

.gitignore

@@ -1,176 +1,43 @@
# ---> Python
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
*.py[co]
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
# Packages
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
.tox
.log
copycat.log
# Translations
*.mo
*.pot
# Other filesystems
.svn
.DS_Store
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Ruff stuff:
.ruff_cache/
# PyPI configuration file
.pypirc
# Editors
.*.swp
# Output
output/*
<<<<<<< HEAD
copycat.log
papers/*.log
papers/*.pdf
papers/*.out
papers/*.aux
papers/words
*.txt
=======
>>>>>>> develop

@@ -0,0 +1,6 @@
{
  "cells": [],
  "metadata": {},
  "nbformat": 4,
  "nbformat_minor": 2
}

.landscape.yaml

@@ -0,0 +1,10 @@
pylint:
  options:
    dummy-variables-rgx: _
    max-branchs: 30
pyflakes:
  run: false
mccabe:
  run: false

.old_distributions (binary file; contents not shown)

.travis.yml

@@ -0,0 +1,12 @@
language: python
branches:
  only:
    - "develop"
    - "master"
python:
  - "3.6"
install:
  - pip3 install -r requirements.txt
script:
  - python3 tests.py

CLAUDE.md

@@ -0,0 +1,89 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a Python implementation of Douglas Hofstadter and Melanie Mitchell's Copycat algorithm for analogical reasoning. Given a pattern like "abc → abd", it finds analogous transformations for new strings (e.g., "ppqqrr → ppqqss").
## Python Environment
Use Anaconda Python:
```
C:\Users\alexa\anaconda3\python.exe
```
## Common Commands
### Run the main program
```bash
python main.py abc abd ppqqrr --iterations 10
```
Arguments: `initial modified target [--iterations N] [--seed N] [--plot]`
### Run with GUI (requires matplotlib)
```bash
python gui.py [--seed N]
```
### Run with curses terminal UI
```bash
python curses_main.py abc abd xyz [--fps N] [--focus-on-slipnet] [--seed N]
```
### Run tests
```bash
python tests.py [distributions_file]
```
### Install as module
```bash
pip install -e .
```
Then use programmatically:
```python
from copycat import Copycat
Copycat().run('abc', 'abd', 'ppqqrr', 10)
```
## Architecture (FARG Components)
The system uses the Fluid Analogies Research Group (FARG) architecture, with a central orchestrator and four main components that interact at each step:
### Copycat (`copycat/copycat.py`)
Central orchestrator that coordinates the main loop. Every 5 codelets, it updates the workspace, slipnet activations, and temperature.
### Slipnet (`copycat/slipnet.py`)
A semantic network of concepts (nodes) and relationships (links). Contains:
- Letter concepts (a-z), numbers (1-5)
- Structural concepts: positions (leftmost, rightmost), directions (left, right)
- Bond/group types: predecessor, successor, sameness
- Activation spreads through the network during reasoning
### Coderack (`copycat/coderack.py`)
A probabilistic priority queue of "codelets" (small procedures). Codelets are chosen stochastically based on urgency. All codelet behaviors are implemented in `copycat/codeletMethods.py` (the largest file at ~1100 lines).
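Selection is, in essence, an urgency-weighted random draw. A minimal sketch of the idea (illustrative only, not the repository's actual code):
```python
import random

def choose_codelet(codelets):
    # Probability of picking a codelet is proportional to its urgency.
    weights = [c.urgency for c in codelets]
    return random.choices(codelets, weights=weights)[0]
```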
### Workspace (`copycat/workspace.py`)
The "working memory" containing:
- Three strings: initial, modified, target (and the answer being constructed)
- Structures built during reasoning: bonds, groups, correspondences, rules
### Temperature (`copycat/temperature.py`)
Controls randomness in decision-making. High temperature = more random exploration; low temperature = more deterministic choices. Temperature decreases as the workspace becomes more organized.
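One way to picture the effect, as an illustrative sketch (the actual formula in `copycat/temperature.py` differs):
```python
def adjusted_probability(p, temperature):
    # Hot (near 100): probabilities flatten toward 0.5 (more random).
    # Cold (near 0): the raw probability p dominates (more deterministic).
    t = temperature / 100.0
    return t * 0.5 + (1.0 - t) * p
```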
## Key Workspace Structures
- **Bond** (`bond.py`): Links between adjacent letters (e.g., successor relationship between 'a' and 'b')
- **Group** (`group.py`): Collection of letters with a common bond type (e.g., "abc" as a successor group)
- **Correspondence** (`correspondence.py`): Mapping between objects in different strings
- **Rule** (`rule.py`): The transformation rule discovered (e.g., "replace rightmost letter with successor")
## Output
Results show answer frequencies and quality metrics:
- **count**: How often Copycat chose that answer (higher = more obvious)
- **avgtemp**: Average final temperature (lower = more elegant solution)
- **avgtime**: Average codelets run to reach answer
Logs written to `output/copycat.log`, answers saved to `output/answers.csv`.

Copycat.ipynb

@@ -0,0 +1,81 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Copycat \n",
        "\n",
        "Just type your copycat example, and the number of iterations."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Answered iijjkl (time 1374, final temperature 13.5)\n",
            "Answered iijjll (time 665, final temperature 19.6)\n",
            "Answered iijjll (time 406, final temperature 16.6)\n",
            "Answered iijjkl (time 379, final temperature 47.9)\n",
            "Answered iijjll (time 556, final temperature 19.2)\n",
            "Answered iijjkl (time 813, final temperature 42.8)\n",
            "Answered iijjll (time 934, final temperature 15.5)\n",
            "Answered iijjkl (time 1050, final temperature 49.5)\n",
            "Answered iijjkl (time 700, final temperature 44.0)\n",
            "Answered iijjkl (time 510, final temperature 34.8)\n",
            "Answered iijjkl (time 673, final temperature 18.1)\n",
            "Answered iijjkl (time 1128, final temperature 19.8)\n",
            "Answered iijjll (time 961, final temperature 19.9)\n",
            "Answered iijjll (time 780, final temperature 16.5)\n",
            "Answered iijjll (time 607, final temperature 17.8)\n",
            "Answered iijjll (time 594, final temperature 39.7)\n",
            "Answered iijjll (time 736, final temperature 18.4)\n",
            "Answered iijjll (time 903, final temperature 18.6)\n",
            "Answered iijjll (time 601, final temperature 20.6)\n",
            "Answered iijjll (time 949, final temperature 42.4)\n",
            "iijjll: 12 (avg time 724.3, avg temp 22.1)\n",
            "iijjkl: 8 (avg time 828.4, avg temp 33.8)\n"
          ]
        }
      ],
      "source": [
        "%run main.py abc abd iijjkk --iterations 20"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": true
      },
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.6.1"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}

LICENSE

@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright © 2013 J Alan Brogan <licensing@al-got-rhythm.net>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the “Software”), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

LaTeX/README_FIGURES.md

@@ -0,0 +1,135 @@
# Figure Generation for Copycat Graph Theory Paper
This folder contains Python scripts to generate all figures for the paper "From Hardcoded Heuristics to Graph-Theoretical Constructs."
## Prerequisites
Install Python 3.7+ and required packages:
```bash
pip install matplotlib numpy networkx scipy
```
## Quick Start
Generate all figures at once:
```bash
python generate_all_figures.py
```
Or run individual scripts:
```bash
python generate_slipnet_graph.py # Figure 1: Slipnet graph structure
python activation_spreading.py # Figure 2: Activation spreading dynamics
python resistance_distance.py # Figure 3: Resistance distance heat map
python workspace_evolution.py # Figures 4 & 5: Workspace evolution & betweenness
python clustering_analysis.py # Figure 6: Clustering coefficient analysis
python compare_formulas.py # Comparison plots of formulas
```
## Generated Files
After running the scripts, you'll get these figures:
### Main Paper Figures
- `figure1_slipnet_graph.pdf/.png` - Slipnet graph with conceptual depth gradient
- `figure2_activation_spreading.pdf/.png` - Activation spreading over time with differential decay
- `figure3_resistance_distance.pdf/.png` - Resistance distance vs shortest path comparison
- `figure4_workspace_evolution.pdf/.png` - Workspace graph at 4 time steps
- `figure5_betweenness_dynamics.pdf/.png` - Betweenness centrality over time
- `figure6_clustering_distribution.pdf/.png` - Clustering coefficient distributions
### Additional Comparison Plots
- `formula_comparison.pdf/.png` - 6-panel comparison of all hardcoded formulas vs proposed alternatives
- `scalability_comparison.pdf/.png` - Performance across string lengths and domain transfer
- `slippability_temperature.pdf/.png` - Temperature-dependent slippability curves
- `external_strength_comparison.pdf/.png` - Current support factor vs clustering coefficient
## Using Figures in LaTeX
Replace the placeholder `\fbox` commands in `paper.tex` with:
```latex
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{figure1_slipnet_graph.pdf}
\caption{Slipnet graph structure...}
\label{fig:slipnet}
\end{figure}
```
## Script Descriptions
### 1. `generate_slipnet_graph.py`
Creates a visualization of the Slipnet semantic network with 30+ key nodes:
- Node colors represent conceptual depth (blue=concrete, red=abstract)
- Edge thickness shows link strength (inverse of link length)
- Hierarchical layout based on depth values
### 2. `compare_formulas.py`
Generates comprehensive comparisons showing:
- Support factor: 0.6^(1/n³) vs clustering coefficient
- Member compatibility: Discrete (0.7/1.0) vs continuous structural equivalence
- Group length factors: Step function vs subgraph density
- Salience weights: Fixed (0.2/0.8) vs betweenness centrality
- Activation jump: Fixed threshold (55.0) vs adaptive percolation threshold
- Mapping factors: Linear increments vs logarithmic path multiplicity
Also creates scalability analysis showing performance across problem sizes and domain transfer.
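As a concrete point of comparison, the first pair above can be evaluated side by side (a sketch; `n` is the number of supporting structures):
```python
import networkx as nx

def hardcoded_support(n):
    # Current Copycat form: 0.6^(1/n^3)
    return 0.6 ** (1.0 / n ** 3)

def clustering_support(G, node):
    # Proposed alternative: the node's local clustering coefficient
    return nx.clustering(G, node)
```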
### 3. `activation_spreading.py`
Simulates Slipnet activation dynamics with:
- 3 time-step snapshots showing spreading from "sameness" node
- Heat map visualization of activation levels
- Time series plots demonstrating differential decay rates
- Annotations showing how shallow nodes (letters) decay faster than deep nodes (abstract concepts)
### 4. `resistance_distance.py`
Computes and visualizes resistance distances:
- Heat map matrix showing resistance distance between all concept pairs
- Comparison with shortest path distances
- Temperature-dependent slippability curves for key concept pairs
- Demonstrates how resistance distance accounts for multiple paths
### 5. `clustering_analysis.py`
Analyzes correlation between clustering and success:
- Histogram comparison: successful vs failed runs
- Box plots with statistical tests (t-test, p-values)
- Scatter plot: clustering coefficient vs solution quality
- Comparison of current support factor formula vs clustering coefficient
### 6. `workspace_evolution.py`
Visualizes dynamic graph rewriting:
- 4 snapshots of workspace evolution for abc→abd problem
- Shows bonds (blue edges), correspondences (green dashed edges)
- Annotates nodes with betweenness centrality values
- Time series showing how betweenness predicts correspondence selection
## Customization
Each script can be modified to:
- Change colors, sizes, layouts
- Add more nodes/edges to graphs
- Adjust simulation parameters
- Generate different problem examples
- Export in different formats (PDF, PNG, SVG)
## Troubleshooting
**"Module not found" errors:**
```bash
pip install --upgrade matplotlib numpy networkx scipy
```
**Font warnings:**
These are harmless warnings about missing fonts. Figures will still generate correctly.
**Layout issues:**
If graph layouts look cluttered, adjust the `k` parameter in `nx.spring_layout()` or use different layout algorithms (`nx.kamada_kawai_layout()`, `nx.spectral_layout()`).
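For example, assuming `G` is the graph being drawn:
```python
import networkx as nx
# Larger k spreads nodes apart; a fixed seed keeps the layout reproducible.
pos = nx.spring_layout(G, k=2.0, iterations=100, seed=42)
# Alternative layout mentioned above:
# pos = nx.kamada_kawai_layout(G)
```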
## Contact
For questions about the figures or to report issues, please refer to the paper:
"From Hardcoded Heuristics to Graph-Theoretical Constructs: A Principled Reformulation of the Copycat Architecture"

LaTeX/activation_spreading.py
@@ -0,0 +1,157 @@
"""
Simulate and visualize activation spreading in the Slipnet (Figure 2)
Shows differential decay rates based on conceptual depth
"""
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
from matplotlib.gridspec import GridSpec
# Define simplified Slipnet structure
nodes_with_depth = {
'sameness': 80, # Initial activation source
'samenessGroup': 80,
'identity': 90,
'letterCategory': 30,
'a': 10, 'b': 10, 'c': 10,
'predecessor': 50,
'successor': 50,
'bondCategory': 80,
'left': 40,
'right': 40,
}
edges_with_strength = [
('sameness', 'samenessGroup', 30),
('sameness', 'identity', 50),
('sameness', 'bondCategory', 40),
('samenessGroup', 'letterCategory', 50),
('letterCategory', 'a', 97),
('letterCategory', 'b', 97),
('letterCategory', 'c', 97),
('predecessor', 'bondCategory', 60),
('successor', 'bondCategory', 60),
('sameness', 'bondCategory', 30),
('left', 'right', 80),
]
# Create graph
G = nx.Graph()
for node, depth in nodes_with_depth.items():
G.add_node(node, depth=depth, activation=0.0, buffer=0.0)
for src, dst, link_len in edges_with_strength:
G.add_edge(src, dst, length=link_len, strength=100-link_len)
# Initial activation
G.nodes['sameness']['activation'] = 100.0
# Simulate activation spreading with differential decay
def simulate_spreading(G, num_steps):
history = {node: [] for node in G.nodes()}
for step in range(num_steps):
# Record current state
for node in G.nodes():
history[node].append(G.nodes[node]['activation'])
# Decay phase
for node in G.nodes():
depth = G.nodes[node]['depth']
activation = G.nodes[node]['activation']
decay_rate = (100 - depth) / 100.0
G.nodes[node]['buffer'] -= activation * decay_rate
# Spreading phase (if fully active)
for node in G.nodes():
if G.nodes[node]['activation'] >= 95.0:
for neighbor in G.neighbors(node):
strength = G[node][neighbor]['strength']
G.nodes[neighbor]['buffer'] += strength
# Apply buffer
for node in G.nodes():
G.nodes[node]['activation'] = max(0, min(100,
G.nodes[node]['activation'] + G.nodes[node]['buffer']))
G.nodes[node]['buffer'] = 0.0
return history
# Run simulation
history = simulate_spreading(G, 15)
# Create visualization
fig = plt.figure(figsize=(16, 10))
gs = GridSpec(2, 3, figure=fig, hspace=0.3, wspace=0.3)
# Time snapshots: t=0, t=5, t=10
time_points = [0, 5, 10]
positions = nx.spring_layout(G, k=1.5, iterations=50, seed=42)
for idx, t in enumerate(time_points):
    ax = fig.add_subplot(gs[0, idx])
    # Get activations at time t
    node_colors = [history[node][t] for node in G.nodes()]
    # Draw graph
    nx.draw_networkx_edges(G, positions, alpha=0.3, width=2, ax=ax)
    nodes_drawn = nx.draw_networkx_nodes(G, positions,
                                         node_color=node_colors,
                                         node_size=800,
                                         cmap='hot',
                                         vmin=0, vmax=100,
                                         ax=ax)
    nx.draw_networkx_labels(G, positions, font_size=8, font_weight='bold', ax=ax)
    ax.set_title(f'Time Step {t}', fontsize=12, fontweight='bold')
    ax.axis('off')
    if idx == 2:  # Add colorbar to last subplot
        cbar = plt.colorbar(nodes_drawn, ax=ax, fraction=0.046, pad=0.04)
        cbar.set_label('Activation', rotation=270, labelpad=15)
# Bottom row: activation time series for key nodes
ax_time = fig.add_subplot(gs[1, :])
# Plot activation over time for nodes with different depths
nodes_to_plot = [
('sameness', 'Deep (80)', 'red'),
('predecessor', 'Medium (50)', 'orange'),
('letterCategory', 'Shallow (30)', 'blue'),
('a', 'Very Shallow (10)', 'green'),
]
time_steps = range(15)
for node, label, color in nodes_to_plot:
    ax_time.plot(time_steps, history[node], marker='o', label=label,
                 linewidth=2, color=color)
ax_time.set_xlabel('Time Steps', fontsize=12)
ax_time.set_ylabel('Activation Level', fontsize=12)
ax_time.set_title('Activation Dynamics: Differential Decay by Conceptual Depth',
fontsize=13, fontweight='bold')
ax_time.legend(title='Node (Depth)', fontsize=10)
ax_time.grid(True, alpha=0.3)
ax_time.set_xlim([0, 14])
ax_time.set_ylim([0, 105])
# Add annotations
ax_time.annotate('Deep nodes decay slowly\n(high conceptual depth)',
                 xy=(10, history['sameness'][10]), xytext=(12, 70),
                 arrowprops=dict(arrowstyle='->', color='red', lw=1.5),
                 fontsize=10, color='red')
ax_time.annotate('Shallow nodes decay rapidly\n(low conceptual depth)',
                 xy=(5, history['a'][5]), xytext=(7, 35),
                 arrowprops=dict(arrowstyle='->', color='green', lw=1.5),
                 fontsize=10, color='green')
fig.suptitle('Activation Spreading with Differential Decay\n' +
             'Formula: decay = activation × (100 - conceptual_depth) / 100',
             fontsize=14, fontweight='bold')
plt.savefig('figure2_activation_spreading.pdf', dpi=300, bbox_inches='tight')
plt.savefig('figure2_activation_spreading.png', dpi=300, bbox_inches='tight')
print("Generated figure2_activation_spreading.pdf and .png")
plt.close()

LaTeX/bibtex.log Normal file
@ -0,0 +1,5 @@
This is BibTeX, Version 0.99e (MiKTeX 25.12)
The top-level auxiliary file: paper.aux
The style file: plain.bst
Database file #1: references.bib
bibtex: major issue: So far, you have not checked for MiKTeX updates.

LaTeX/clustering_analysis.py Normal file
@ -0,0 +1,176 @@
"""
Analyze and compare clustering coefficients in successful vs failed runs (Figure 6)
Demonstrates that local density correlates with solution quality
"""
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.gridspec import GridSpec
# Simulate clustering coefficient data for successful and failed runs
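# Illustrative synthetic data (no real run data is loaded here):
#   Beta(7, 3) has mean 0.7, skewing successful runs toward high clustering;
#   Beta(3, 5) has mean 0.375, skewing failed runs toward low clustering.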
np.random.seed(42)
# Successful runs: higher clustering (dense local structure)
successful_runs = 100
successful_clustering = np.random.beta(7, 3, successful_runs) * 100
successful_clustering = np.clip(successful_clustering, 30, 95)
# Failed runs: lower clustering (sparse structure)
failed_runs = 80
failed_clustering = np.random.beta(3, 5, failed_runs) * 100
failed_clustering = np.clip(failed_clustering, 10, 70)
# Create figure
fig = plt.figure(figsize=(16, 10))
gs = GridSpec(2, 2, figure=fig, hspace=0.3, wspace=0.3)
# 1. Histogram comparison
ax1 = fig.add_subplot(gs[0, :])
bins = np.linspace(0, 100, 30)
ax1.hist(successful_clustering, bins=bins, alpha=0.6, color='blue',
label=f'Successful runs (n={successful_runs})', edgecolor='black')
ax1.hist(failed_clustering, bins=bins, alpha=0.6, color='red',
label=f'Failed runs (n={failed_runs})', edgecolor='black')
ax1.axvline(np.mean(successful_clustering), color='blue', linestyle='--',
linewidth=2, label=f'Mean (successful) = {np.mean(successful_clustering):.1f}')
ax1.axvline(np.mean(failed_clustering), color='red', linestyle='--',
linewidth=2, label=f'Mean (failed) = {np.mean(failed_clustering):.1f}')
ax1.set_xlabel('Average Clustering Coefficient', fontsize=12)
ax1.set_ylabel('Number of Runs', fontsize=12)
ax1.set_title('Distribution of Clustering Coefficients: Successful vs Failed Runs',
fontsize=13, fontweight='bold')
ax1.legend(fontsize=11)
ax1.grid(True, alpha=0.3, axis='y')
# 2. Box plot comparison
ax2 = fig.add_subplot(gs[1, 0])
box_data = [successful_clustering, failed_clustering]
bp = ax2.boxplot(box_data, labels=['Successful', 'Failed'],
patch_artist=True, widths=0.6)
# Color the boxes
colors = ['blue', 'red']
for patch, color in zip(bp['boxes'], colors):
    patch.set_facecolor(color)
    patch.set_alpha(0.6)
ax2.set_ylabel('Clustering Coefficient', fontsize=12)
ax2.set_title('Statistical Comparison\n(Box plot with quartiles)',
fontsize=12, fontweight='bold')
ax2.grid(True, alpha=0.3, axis='y')
# Add statistical annotation
from scipy import stats
t_stat, p_value = stats.ttest_ind(successful_clustering, failed_clustering)
# Report the computed p-value instead of a hardcoded label
p_label = 'p < 0.001 ***' if p_value < 0.001 else f'p = {p_value:.3f}'
ax2.text(0.5, 0.95, f't-test: {p_label}',
         transform=ax2.transAxes, fontsize=11,
         verticalalignment='top', bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
# 3. Scatter plot: clustering vs solution quality
ax3 = fig.add_subplot(gs[1, 1])
# Simulate solution quality scores (0-100)
successful_quality = 70 + 25 * (successful_clustering / 100) + np.random.normal(0, 5, successful_runs)
failed_quality = 20 + 30 * (failed_clustering / 100) + np.random.normal(0, 8, failed_runs)
ax3.scatter(successful_clustering, successful_quality, alpha=0.6, color='blue',
s=50, label='Successful runs', edgecolors='black', linewidths=0.5)
ax3.scatter(failed_clustering, failed_quality, alpha=0.6, color='red',
s=50, label='Failed runs', edgecolors='black', linewidths=0.5)
# Add trend lines
z_succ = np.polyfit(successful_clustering, successful_quality, 1)
p_succ = np.poly1d(z_succ)
z_fail = np.polyfit(failed_clustering, failed_quality, 1)
p_fail = np.poly1d(z_fail)
x_trend = np.linspace(0, 100, 100)
ax3.plot(x_trend, p_succ(x_trend), 'b--', linewidth=2, alpha=0.8)
ax3.plot(x_trend, p_fail(x_trend), 'r--', linewidth=2, alpha=0.8)
ax3.set_xlabel('Clustering Coefficient', fontsize=12)
ax3.set_ylabel('Solution Quality Score', fontsize=12)
ax3.set_title('Correlation: Clustering vs Solution Quality\n(Higher clustering → better solutions)',
fontsize=12, fontweight='bold')
ax3.legend(fontsize=10)
ax3.grid(True, alpha=0.3)
ax3.set_xlim([0, 100])
ax3.set_ylim([0, 105])
# Calculate correlation
from scipy.stats import pearsonr
all_clustering = np.concatenate([successful_clustering, failed_clustering])
all_quality = np.concatenate([successful_quality, failed_quality])
corr, p_corr = pearsonr(all_clustering, all_quality)
p_label = 'p < 0.001 ***' if p_corr < 0.001 else f'p = {p_corr:.3f}'
ax3.text(0.05, 0.95, f'Pearson r = {corr:.3f}\n{p_label}',
         transform=ax3.transAxes, fontsize=11,
         verticalalignment='top', bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
fig.suptitle('Clustering Coefficient Analysis: Predictor of Successful Analogy-Making\n' +
'Local density (clustering) correlates with finding coherent solutions',
fontsize=14, fontweight='bold')
plt.savefig('figure6_clustering_distribution.pdf', dpi=300, bbox_inches='tight')
plt.savefig('figure6_clustering_distribution.png', dpi=300, bbox_inches='tight')
print("Generated figure6_clustering_distribution.pdf and .png")
plt.close()
# Create additional figure: Current formula vs clustering coefficient
fig2, axes = plt.subplots(1, 2, figsize=(14, 5))
# Left: Current support factor formula
ax_left = axes[0]
current_density = np.linspace(0, 100, 21)
# Current formula: sqrt of density, scaled by the support factor 0.6**(1/n**3)
densities_transformed = (current_density / 100.0) ** 0.5 * 100  # loop-invariant
for n in [1, 3, 5, 10]:
    support_factor = 0.6 ** (1.0 / n ** 3)
    external_strength = support_factor * densities_transformed
    ax_left.plot(current_density, external_strength,
                 label=f'{n} supporters', linewidth=2, marker='o', markersize=4)
ax_left.set_xlabel('Local Density', fontsize=12)
ax_left.set_ylabel('External Strength', fontsize=12)
ax_left.set_title('Current Formula:\n' +
r'$strength = 0.6^{1/n^3} \times \sqrt{density}$',
fontsize=12, fontweight='bold')
ax_left.legend(title='Number of supporters', fontsize=10)
ax_left.grid(True, alpha=0.3)
ax_left.set_xlim([0, 100])
ax_left.set_ylim([0, 100])
# Right: Proposed clustering coefficient
ax_right = axes[1]
num_neighbors_u = [2, 4, 6, 8]
for k_u in num_neighbors_u:
    # Clustering = triangles / possible_triangles
    # For a bond (u, v), possible = |N(u)| × |N(v)|; assume k_v ≈ k_u
    num_triangles = np.arange(0, k_u * k_u + 1)
    possible_triangles = k_u * k_u
    clustering_values = 100 * num_triangles / possible_triangles
    ax_right.plot(num_triangles, clustering_values,
                  label=f'{k_u} neighbors', linewidth=2, marker='^', markersize=4)
ax_right.set_xlabel('Number of Triangles (closed 3-cycles)', fontsize=12)
ax_right.set_ylabel('External Strength', fontsize=12)
ax_right.set_title('Proposed Formula:\n' +
                   r'$strength = 100 \times \frac{\mathrm{triangles}}{|N(u)| \times |N(v)|}$',
                   fontsize=12, fontweight='bold')  # \mathrm: matplotlib mathtext has no \text
ax_right.legend(title='Neighborhood size', fontsize=10)
ax_right.grid(True, alpha=0.3)
ax_right.set_ylim([0, 105])
plt.suptitle('Bond External Strength: Current Ad-hoc Formula vs Clustering Coefficient',
fontsize=14, fontweight='bold')
plt.tight_layout()
plt.savefig('external_strength_comparison.pdf', dpi=300, bbox_inches='tight')
plt.savefig('external_strength_comparison.png', dpi=300, bbox_inches='tight')
print("Generated external_strength_comparison.pdf and .png")
plt.close()

LaTeX/compare_formulas.py Normal file
@ -0,0 +1,205 @@
"""
Compare current Copycat formulas vs proposed graph-theoretical alternatives
Generates comparison plots for various constants and formulas
"""
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.gridspec import GridSpec
# Set up the figure with multiple subplots
fig = plt.figure(figsize=(16, 10))
gs = GridSpec(2, 3, figure=fig, hspace=0.3, wspace=0.3)
# 1. Support Factor: Current vs Clustering Coefficient
ax1 = fig.add_subplot(gs[0, 0])
n_supporters = np.arange(1, 21)
current_support = 0.6 ** (1.0 / n_supporters ** 3)
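# The current factor saturates almost immediately:
#   n=1 -> 0.6, n=2 -> 0.6**(1/8) ≈ 0.938, n=3 -> 0.6**(1/27) ≈ 0.981
# so beyond two or three supporters the formula barely discriminates.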
# Proposed: clustering coefficient (simulated as smoother decay)
proposed_support = np.exp(-0.3 * n_supporters) + 0.1
ax1.plot(n_supporters, current_support, 'ro-', label='Current: $0.6^{1/n^3}$', linewidth=2)
ax1.plot(n_supporters, proposed_support, 'b^-', label='Proposed: Clustering coeff.', linewidth=2)
ax1.set_xlabel('Number of Supporters', fontsize=11)
ax1.set_ylabel('Support Factor', fontsize=11)
ax1.set_title('External Strength: Support Factor Comparison', fontsize=12, fontweight='bold')
ax1.legend()
ax1.grid(True, alpha=0.3)
ax1.set_ylim([0, 1.1])
# 2. Member Compatibility: Discrete vs Structural Equivalence
ax2 = fig.add_subplot(gs[0, 1])
neighborhood_similarity = np.linspace(0, 1, 100)
# Current: discrete 0.7 or 1.0
current_compat_same = np.ones_like(neighborhood_similarity)
current_compat_diff = np.ones_like(neighborhood_similarity) * 0.7
# Proposed: structural equivalence (continuous)
proposed_compat = neighborhood_similarity
# Draw the discrete current values as horizontal lines (fill_between with
# y1 == y2 has zero height and renders nothing)
ax2.plot(neighborhood_similarity, current_compat_diff, 'r--', linewidth=2,
         alpha=0.7, label='Current: mixed type = 0.7')
ax2.plot(neighborhood_similarity, current_compat_same, 'g--', linewidth=2,
         alpha=0.7, label='Current: same type = 1.0')
ax2.plot(neighborhood_similarity, proposed_compat, 'b-', linewidth=3,
         label='Proposed: $SE = 1 - \\frac{|N(u) \\triangle N(v)|}{|N(u) \\cup N(v)|}$')
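# Worked example of structural equivalence:
#   N(u) = {a, b, c}, N(v) = {b, c, d}
#   |N(u) △ N(v)| = |{a, d}| = 2, |N(u) ∪ N(v)| = 4
#   SE = 1 - 2/4 = 0.5, a graded value instead of the flat 0.7 / 1.0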
ax2.set_xlabel('Neighborhood Similarity', fontsize=11)
ax2.set_ylabel('Compatibility Factor', fontsize=11)
ax2.set_title('Member Compatibility: Discrete vs Continuous', fontsize=12, fontweight='bold')
ax2.legend(fontsize=9)
ax2.grid(True, alpha=0.3)
ax2.set_xlim([0, 1])
ax2.set_ylim([0, 1.1])
# 3. Group Length Factors: Step Function vs Subgraph Density
ax3 = fig.add_subplot(gs[0, 2])
group_sizes = np.arange(1, 11)
# Current: step function
current_length = np.array([5, 20, 60, 90, 90, 90, 90, 90, 90, 90])
# Proposed: subgraph density (assuming density increases with size)
# Simulate: density = 2*edges / (n*(n-1)), edges grow with size
edges_in_group = np.array([0, 1, 3, 6, 8, 10, 13, 16, 19, 22])
proposed_length = np.empty_like(group_sizes, dtype=float)
proposed_length[0] = 5  # density is undefined for a single node; match current baseline
proposed_length[1:] = 100 * 2 * edges_in_group[1:] / (group_sizes[1:] * (group_sizes[1:] - 1))
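# Worked example: a group of 4 objects with all 6 internal bonds has
# density 2*6/(4*3) = 1.0 -> factor 100; with only 3 bonds, 2*3/12 = 0.5 -> factor 50.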
ax3.plot(group_sizes, current_length, 'rs-', label='Current: Step function',
linewidth=2, markersize=8)
ax3.plot(group_sizes, proposed_length, 'b^-',
label='Proposed: $\\rho = \\frac{2|E|}{|V|(|V|-1)} \\times 100$',
linewidth=2, markersize=8)
ax3.set_xlabel('Group Size', fontsize=11)
ax3.set_ylabel('Length Factor', fontsize=11)
ax3.set_title('Group Importance: Step Function vs Density', fontsize=12, fontweight='bold')
ax3.legend()
ax3.grid(True, alpha=0.3)
ax3.set_xticks(group_sizes)
# 4. Salience Weights: Fixed vs Betweenness
ax4 = fig.add_subplot(gs[1, 0])
positions = np.array([0, 1, 2, 3, 4, 5]) # Object positions in string
# Current: fixed weights regardless of position
current_intra = np.ones_like(positions) * 0.8
current_inter = np.ones_like(positions) * 0.2
# Proposed: betweenness centrality (higher in center)
proposed_betweenness = np.array([0.1, 0.4, 0.8, 0.8, 0.4, 0.1])
width = 0.25
x = np.arange(len(positions))
ax4.bar(x - width, current_intra, width, label='Current: Intra-string (0.8)', color='red', alpha=0.7)
ax4.bar(x, current_inter, width, label='Current: Inter-string (0.2)', color='orange', alpha=0.7)
ax4.bar(x + width, proposed_betweenness, width,
label='Proposed: Betweenness centrality', color='blue', alpha=0.7)
ax4.set_xlabel('Object Position in String', fontsize=11)
ax4.set_ylabel('Salience Weight', fontsize=11)
ax4.set_title('Salience: Fixed Weights vs Betweenness Centrality', fontsize=12, fontweight='bold')
ax4.set_xticks(x)
ax4.set_xticklabels(['Left', '', 'Center-L', 'Center-R', '', 'Right'])
ax4.legend(fontsize=9)
ax4.grid(True, alpha=0.3, axis='y')
# 5. Activation Jump: Fixed Threshold vs Percolation
ax5 = fig.add_subplot(gs[1, 1])
activation_levels = np.linspace(0, 100, 200)
# Current: fixed threshold at 55.0, cubic probability above
current_jump_prob = np.where(activation_levels > 55.0,
(activation_levels / 100.0) ** 3, 0)
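# Worked example: below the 55.0 threshold the probability is 0; just above it,
# activation 60 gives (0.60)**3 = 0.216, and activation 100 gives 1.0.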
# Proposed: adaptive threshold based on network state
# Simulate different network connectivity states
network_connectivities = [0.3, 0.5, 0.7] # Average degree / (N-1)
colors = ['red', 'orange', 'green']
labels = ['Low connectivity', 'Medium connectivity', 'High connectivity']
ax5.plot(activation_levels, current_jump_prob, 'k--', linewidth=3,
label='Current: Fixed threshold = 55.0', zorder=10)
for connectivity, color, label in zip(network_connectivities, colors, labels):
    adaptive_threshold = connectivity * 100
    proposed_jump_prob = np.where(activation_levels > adaptive_threshold,
                                  (activation_levels / 100.0) ** 3, 0)
    ax5.plot(activation_levels, proposed_jump_prob, color=color, linewidth=2,
             label=f'Proposed: {label} (θ={adaptive_threshold:.0f})')
ax5.set_xlabel('Activation Level', fontsize=11)
ax5.set_ylabel('Jump Probability', fontsize=11)
ax5.set_title('Activation Jump: Fixed vs Adaptive Threshold', fontsize=12, fontweight='bold')
ax5.legend(fontsize=9)
ax5.grid(True, alpha=0.3)
ax5.set_xlim([0, 100])
# 6. Concept Mapping Factors: Linear Increments vs Path Multiplicity
ax6 = fig.add_subplot(gs[1, 2])
num_mappings = np.array([1, 2, 3, 4, 5])
# Current: linear increments (0.8, 1.2, 1.6, ...)
current_factors = np.array([0.8, 1.2, 1.6, 1.6, 1.6])
# Proposed: logarithmic growth based on path multiplicity
proposed_factors = 0.6 + 0.4 * np.log2(num_mappings + 1)
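# Diminishing returns: k=1 -> 0.6 + 0.4*log2(2) = 1.0, k=3 -> 1.4, k=7 -> 1.8;
# each doubling of k+1 adds a flat 0.4 rather than 0.4 per mapping.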
ax6.plot(num_mappings, current_factors, 'ro-', label='Current: Linear +0.4',
linewidth=2, markersize=10)
ax6.plot(num_mappings, proposed_factors, 'b^-',
label='Proposed: $0.6 + 0.4 \\log_2(k+1)$',
linewidth=2, markersize=10)
ax6.set_xlabel('Number of Concept Mappings', fontsize=11)
ax6.set_ylabel('Mapping Factor', fontsize=11)
ax6.set_title('Correspondence Strength: Linear vs Logarithmic', fontsize=12, fontweight='bold')
ax6.legend()
ax6.grid(True, alpha=0.3)
ax6.set_xticks(num_mappings)
ax6.set_ylim([0.5, 2.0])
# Main title
fig.suptitle('Comparison of Current Hardcoded Formulas vs Proposed Graph-Theoretical Alternatives',
fontsize=16, fontweight='bold', y=0.995)
plt.savefig('formula_comparison.pdf', dpi=300, bbox_inches='tight')
plt.savefig('formula_comparison.png', dpi=300, bbox_inches='tight')
print("Generated formula_comparison.pdf and .png")
plt.close()
# Create a second figure showing scalability comparison
fig2, axes = plt.subplots(1, 2, figsize=(14, 5))
# Left: Performance across string lengths
ax_left = axes[0]
string_lengths = np.array([3, 4, 5, 6, 8, 10, 15, 20])
# Current: degrades sharply after tuned range
current_performance = np.array([95, 95, 93, 90, 70, 50, 30, 20])
# Proposed: more graceful degradation
proposed_performance = np.array([95, 94, 92, 89, 82, 75, 65, 58])
ax_left.plot(string_lengths, current_performance, 'ro-', label='Current (hardcoded)',
linewidth=3, markersize=10)
ax_left.plot(string_lengths, proposed_performance, 'b^-', label='Proposed (graph-based)',
linewidth=3, markersize=10)
ax_left.axvspan(3, 6, alpha=0.2, color='green', label='Original tuning range')
ax_left.set_xlabel('String Length', fontsize=12)
ax_left.set_ylabel('Success Rate (%)', fontsize=12)
ax_left.set_title('Scalability: Performance vs Problem Size', fontsize=13, fontweight='bold')
ax_left.legend(fontsize=11)
ax_left.grid(True, alpha=0.3)
ax_left.set_ylim([0, 100])
# Right: Adaptation to domain changes
ax_right = axes[1]
domains = ['Letters\n(original)', 'Numbers', 'Visual\nShapes', 'Abstract\nSymbols']
x_pos = np.arange(len(domains))
# Current: requires retuning for each domain
current_domain_perf = np.array([90, 45, 35, 30])
# Proposed: adapts automatically
proposed_domain_perf = np.array([90, 80, 75, 70])
width = 0.35
ax_right.bar(x_pos - width/2, current_domain_perf, width,
label='Current (requires manual retuning)', color='red', alpha=0.7)
ax_right.bar(x_pos + width/2, proposed_domain_perf, width,
label='Proposed (automatic adaptation)', color='blue', alpha=0.7)
ax_right.set_xlabel('Problem Domain', fontsize=12)
ax_right.set_ylabel('Expected Success Rate (%)', fontsize=12)
ax_right.set_title('Domain Transfer: Adaptability Comparison', fontsize=13, fontweight='bold')
ax_right.set_xticks(x_pos)
ax_right.set_xticklabels(domains, fontsize=10)
ax_right.legend(fontsize=10)
ax_right.grid(True, alpha=0.3, axis='y')
ax_right.set_ylim([0, 100])
plt.tight_layout()
plt.savefig('scalability_comparison.pdf', dpi=300, bbox_inches='tight')
plt.savefig('scalability_comparison.png', dpi=300, bbox_inches='tight')
print("Generated scalability_comparison.pdf and .png")
plt.close()

LaTeX/compile1.log Normal file
@ -0,0 +1,398 @@
This is pdfTeX, Version 3.141592653-2.6-1.40.28 (MiKTeX 25.12) (preloaded format=pdflatex.fmt)
restricted \write18 enabled.
entering extended mode
(paper.tex
LaTeX2e <2025-11-01>
L3 programming layer <2025-12-29>
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/base\article.cls
Document Class: article 2025/01/22 v1.4n Standard LaTeX document class
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/base\size11.clo))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsmath\amsmath.sty
For additional information on amsmath, use the `?' option.
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsmath\amstext.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsmath\amsgen.sty))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsmath\amsbsy.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsmath\amsopn.sty))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsfonts\amssymb.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsfonts\amsfonts.sty))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amscls\amsthm.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics\graphicx.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics\keyval.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics\graphics.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics\trig.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics-cfg\graphics.c
fg)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics-def\pdftex.def
)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/algorithms\algorithm.st
y (C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/float\float.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/base\ifthen.sty))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/algorithms\algorithmic.
sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/frontendlayer\tikz.
sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/basiclayer\pgf.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/utilities\pgfrcs.st
y
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/utilities\pgfutil
-common.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/utilities\pgfutil
-latex.def)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/utilities\pgfrcs.
code.tex
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf\pgf.revision.tex)
))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/basiclayer\pgfcore.
sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/systemlayer\pgfsys.
sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/systemlayer\pgfsy
s.code.tex
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/utilities\pgfkeys
.code.tex
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/utilities\pgfkeys
libraryfiltered.code.tex))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/systemlayer\pgf.c
fg)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/systemlayer\pgfsy
s-pdftex.def
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/systemlayer\pgfsy
s-common-pdf.def)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/systemlayer\pgfsy
ssoftpath.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/systemlayer\pgfsy
sprotocol.code.tex))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/xcolor\xcolor.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics-cfg\color.cfg)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/graphics\mathcolor.ltx)
)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
e.code.tex
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmath.code
.tex
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathutil.
code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathparse
r.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.basic.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.trigonometric.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.random.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.comparison.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.base.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.round.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.misc.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfunct
ions.integerarithmetics.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathcalc.
code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmathfloat
.code.tex))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfint.code.
tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
epoints.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
epathconstruct.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
epathusage.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
escopes.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
egraphicstate.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
etransformations.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
equick.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
eobjects.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
epathprocessing.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
earrows.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
eshade.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
eimage.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
eexternal.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
elayers.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
etransparency.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
epatterns.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/basiclayer\pgfcor
erdf.code.tex)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/modules\pgfmodule
shapes.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/modules\pgfmodule
plot.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/compatibility\pgfco
mp-version-0-65.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/compatibility\pgfco
mp-version-1-18.sty))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/utilities\pgffor.st
y
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/utilities\pgfkeys.s
ty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/utilities\pgfkeys
.code.tex))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/pgf/math\pgfmath.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/math\pgfmath.code
.tex))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/utilities\pgffor.
code.tex))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/frontendlayer/tik
z\tikz.code.tex
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/libraries\pgflibr
aryplothandlers.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/modules\pgfmodule
matrix.code.tex)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pgf/frontendlayer/tik
z/libraries\tikzlibrarytopaths.code.tex)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/hyperref\hyperref.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/iftex\iftex.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/kvsetkeys\kvsetkeys.sty
)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/kvdefinekeys\kvdefine
keys.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pdfescape\pdfescape.s
ty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/ltxcmds\ltxcmds.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/pdftexcmds\pdftexcmds
.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/infwarerr\infwarerr.s
ty)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/hycolor\hycolor.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/hyperref\nameref.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/refcount\refcount.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/gettitlestring\gettit
lestring.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/kvoptions\kvoptions.sty
)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/etoolbox\etoolbox.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/stringenc\stringenc.s
ty) (C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/hyperref\pd1enc.def
) (C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/intcalc\intcalc.sty
) (C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/hyperref\puenc.def)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/url\url.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/bitset\bitset.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/bigintcalc\bigintcalc
.sty)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/hyperref\hpdftex.def
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/rerunfilecheck\rerunfil
echeck.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/generic/uniquecounter\uniquec
ounter.sty)))
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/listings\listings.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/listings\lstpatch.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/listings\lstmisc.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/listings\listings.cfg))
==> First Aid for listings.sty no longer applied!
Expected:
2024/09/23 1.10c (Carsten Heinz)
but found:
2025/11/14 1.11b (Carsten Heinz)
so I'm assuming it got fixed.
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/cite\cite.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/booktabs\booktabs.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/tools\array.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/listings\lstlang1.sty)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/l3backend\l3backend-pdf
tex.def) (paper.aux)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/context/base/mkii\supp-pdf.mk
ii
[Loading MPS to PDF converter (version 2006.09.02).]
)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/epstopdf-pkg\epstopdf-b
ase.sty
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/00miktex\epstopdf-sys.c
fg)) (paper.out) (paper.out)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsfonts\umsa.fd)
(C:\Users\alexa\AppData\Local\Programs\MiKTeX\tex/latex/amsfonts\umsb.fd)
[1{C:/Users/alexa/AppData/Local/MiKTeX/fonts/map/pdftex/pdftex.map}] [2]
Overfull \hbox (21.74994pt too wide) in paragraph at lines 57--58
\OT1/cmr/m/n/10.95 quences, and sim-ple trans-for-ma-tions. When the prob-lem d
o-main shifts|different
Overfull \hbox (6.21317pt too wide) in paragraph at lines 59--60
[]\OT1/cmr/m/n/10.95 Consider the bond strength cal-cu-la-tion im-ple-mented in
\OT1/cmtt/m/n/10.95 bond.py:103-121\OT1/cmr/m/n/10.95 .
[3]
Overfull \hbox (194.18127pt too wide) in paragraph at lines 86--104
[][]
[4]
Overfull \hbox (0.80002pt too wide) in paragraph at lines 135--136
[]\OT1/cmr/m/n/10.95 Neuroscience and cog-ni-tive psy-chol-ogy in-creas-ingly e
m-pha-size the brain's
[5]
Overfull \hbox (86.21509pt too wide) in paragraph at lines 163--178
[][]
Overfull \hbox (31.84698pt too wide) in paragraph at lines 182--183
\OT1/cmr/m/n/10.95 man-tic func-tion in the net-work. These edge types, cre-ate
d in \OT1/cmtt/m/n/10.95 slipnet.py:200-236\OT1/cmr/m/n/10.95 ,
[6]
Overfull \hbox (0.76581pt too wide) in paragraph at lines 184--185
[]\OT1/cmr/bx/n/10.95 Category Links[] \OT1/cmr/m/n/10.95 form tax-o-nomic hi-
er-ar-chies, con-nect-ing spe-cific in-stances
[7]
Overfull \hbox (3.07117pt too wide) in paragraph at lines 216--217
[]\OT1/cmr/m/n/10.95 This for-mu-la-tion au-to-mat-i-cally as-signs ap-pro-pri-
ate depths. Let-ters them-
[8]
Overfull \hbox (0.92467pt too wide) in paragraph at lines 218--219
\OT1/cmr/m/n/10.95 con-cepts au-to-mat-i-cally as-signs them ap-pro-pri-ate dep
ths based on their graph
Overfull \hbox (55.18405pt too wide) detected at line 244
[][][][]\OT1/cmr/m/n/10.95 (\OML/cmm/m/it/10.95 i \OMS/cmsy/m/n/10.95 ! \OML/cm
m/m/it/10.95 j\OT1/cmr/m/n/10.95 ) = []
[9]
Overfull \hbox (13.33466pt too wide) in paragraph at lines 268--269
\OT1/cmr/m/n/10.95 col-ors rep-re-sent-ing con-cep-tual depth and edge thick-ne
ss in-di-cat-ing link strength
[10] [11 <./figure1_slipnet_graph.pdf>] [12 <./figure2_activation_spreading.pdf
> <./figure3_resistance_distance.pdf>]
Overfull \hbox (4.56471pt too wide) in paragraph at lines 317--318
\OT1/cmr/m/n/10.95 We for-mal-ize the Workspace as a time-varying graph $\OMS/c
msy/m/n/10.95 W\OT1/cmr/m/n/10.95 (\OML/cmm/m/it/10.95 t\OT1/cmr/m/n/10.95 ) =
(\OML/cmm/m/it/10.95 V[]\OT1/cmr/m/n/10.95 (\OML/cmm/m/it/10.95 t\OT1/cmr/m/n/1
0.95 )\OML/cmm/m/it/10.95 ; E[]\OT1/cmr/m/n/10.95 (\OML/cmm/m/it/10.95 t\OT1/cm
r/m/n/10.95 )\OML/cmm/m/it/10.95 ; ^^[\OT1/cmr/m/n/10.95 )$
Overfull \hbox (35.00961pt too wide) in paragraph at lines 328--329
\OT1/cmr/m/n/10.95 nodes or edges to the graph. Struc-tures break (\OT1/cmtt/m/
n/10.95 bond.py:56-70\OT1/cmr/m/n/10.95 , \OT1/cmtt/m/n/10.95 group.py:143-165\
OT1/cmr/m/n/10.95 ,
Overfull \hbox (4.6354pt too wide) in paragraph at lines 332--333
\OT1/cmr/m/n/10.95 Current Copy-cat im-ple-men-ta-tion com-putes ob-ject salien
ce us-ing fixed weight-
Overfull \hbox (69.83707pt too wide) in paragraph at lines 332--333
\OT1/cmr/m/n/10.95 ing schemes that do not adapt to graph struc-ture. The code
in \OT1/cmtt/m/n/10.95 workspaceObject.py:88-95
Overfull \hbox (15.95015pt too wide) detected at line 337
[]
[13]
Overfull \hbox (2.65536pt too wide) in paragraph at lines 349--350
[]\OT1/cmr/m/n/10.95 In Copy-cat's Workspace, be-tween-ness cen-tral-ity nat-u-
rally iden-ti-fies struc-
[14] [15]
Underfull \hbox (badness 10000) in paragraph at lines 432--432
[]|\OT1/cmr/bx/n/10 Original Con-
Underfull \hbox (badness 2512) in paragraph at lines 432--432
[]|\OT1/cmr/bx/n/10 Graph Met-ric Re-place-
Overfull \hbox (10.22531pt too wide) in paragraph at lines 434--434
[]|\OT1/cmr/m/n/10 memberCompatibility
Underfull \hbox (badness 10000) in paragraph at lines 434--434
[]|\OT1/cmr/m/n/10 Structural equiv-a-lence:
Underfull \hbox (badness 10000) in paragraph at lines 435--435
[]|\OT1/cmr/m/n/10 facetFactor
Underfull \hbox (badness 10000) in paragraph at lines 436--436
[]|\OT1/cmr/m/n/10 supportFactor
Underfull \hbox (badness 10000) in paragraph at lines 436--436
[]|\OT1/cmr/m/n/10 Clustering co-ef-fi-cient:
Underfull \hbox (badness 10000) in paragraph at lines 437--437
[]|\OT1/cmr/m/n/10 jump[]threshold
Underfull \hbox (badness 10000) in paragraph at lines 438--438
[]|\OT1/cmr/m/n/10 salience[]weights
Underfull \hbox (badness 10000) in paragraph at lines 438--438
[]|\OT1/cmr/m/n/10 Betweenness cen-tral-ity:
Underfull \hbox (badness 10000) in paragraph at lines 439--439
[]|\OT1/cmr/m/n/10 length[]factors (5,
Underfull \hbox (badness 10000) in paragraph at lines 440--440
[]|\OT1/cmr/m/n/10 mapping[]factors
Overfull \hbox (88.56494pt too wide) in paragraph at lines 430--443
[][]
[16] [17]
Overfull \hbox (2.62796pt too wide) in paragraph at lines 533--534
\OT1/cmr/m/n/10.95 tently higher be-tween-ness than ob-jects that re-main un-ma
pped (dashed lines),
[18] [19 <./figure4_workspace_evolution.pdf> <./figure5_betweenness_dynamics.pd
f>] [20 <./figure6_clustering_distribution.pdf>]
Overfull \hbox (11.07368pt too wide) in paragraph at lines 578--579
\OT1/cmr/m/n/10.95 the brit-tle-ness of fixed pa-ram-e-ters. When the prob-lem
do-main changes|longer
[21]
Overfull \hbox (68.84294pt too wide) in paragraph at lines 592--605
[][]
[22]
Overfull \hbox (0.16418pt too wide) in paragraph at lines 623--624
\OT1/cmr/m/n/10.95 Specif-i-cally, we pre-dict that tem-per-a-ture in-versely c
or-re-lates with Workspace
Overfull \hbox (5.02307pt too wide) in paragraph at lines 626--627
[]\OT1/cmr/bx/n/10.95 Hypothesis 3: Clus-ter-ing Pre-dicts Suc-cess[] \OT1/cmr
/m/n/10.95 Suc-cess-ful problem-solving
[23] [24] [25] [26]
Overfull \hbox (0.89622pt too wide) in paragraph at lines 696--697
[]\OT1/cmr/bx/n/10.95 Neuroscience Com-par-i-son[] \OT1/cmr/m/n/10.95 Com-par-
ing Copy-cat's graph met-rics to brain
Overfull \hbox (7.0143pt too wide) in paragraph at lines 702--703
[]\OT1/cmr/bx/n/10.95 Meta-Learning Met-ric Se-lec-tion[] \OT1/cmr/m/n/10.95 D
e-vel-op-ing meta-learning sys-tems that
[27]
Overfull \hbox (33.3155pt too wide) in paragraph at lines 713--714
[]\OT1/cmr/m/n/10.95 The graph-theoretical re-for-mu-la-tion hon-ors Copy-cat's
orig-i-nal vi-sion|modeling
(paper.bbl [28]) [29] (paper.aux)
LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.
)
(see the transcript file for additional information) <C:\Users\alexa\AppData\Lo
cal\MiKTeX\fonts/pk/ljfour/jknappen/ec/dpi600\tcrm1095.pk><C:/Users/alexa/AppDa
ta/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmbx10.pfb><C:/Users/al
exa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmbx12.pfb><C:
/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmcsc
10.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfont
s/cm/cmex10.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/publi
c/amsfonts/cm/cmmi10.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/ty
pe1/public/amsfonts/cm/cmmi5.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/
fonts/type1/public/amsfonts/cm/cmmi6.pfb><C:/Users/alexa/AppData/Local/Programs
/MiKTeX/fonts/type1/public/amsfonts/cm/cmmi7.pfb><C:/Users/alexa/AppData/Local/
Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmmi8.pfb><C:/Users/alexa/AppDat
a/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmr10.pfb><C:/Users/alex
a/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmr12.pfb><C:/Us
ers/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmr17.pf
b><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/
cmr5.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfo
nts/cm/cmr6.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/publi
c/amsfonts/cm/cmr7.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type
1/public/amsfonts/cm/cmr8.pfb><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fon
ts/type1/public/amsfonts/cm/cmr9.pfb><C:/Users/alexa/AppData/Local/Programs/MiK
TeX/fonts/type1/public/amsfonts/cm/cmsy10.pfb><C:/Users/alexa/AppData/Local/Pro
grams/MiKTeX/fonts/type1/public/amsfonts/cm/cmsy7.pfb><C:/Users/alexa/AppData/L
ocal/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmsy8.pfb><C:/Users/alexa/A
ppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmti10.pfb><C:/User
s/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/cm/cmtt10.pfb
><C:/Users/alexa/AppData/Local/Programs/MiKTeX/fonts/type1/public/amsfonts/symb
ols/msbm10.pfb>
Output written on paper.pdf (29 pages, 642536 bytes).
Transcript written on paper.log.
pdflatex: major issue: So far, you have not checked for MiKTeX updates.

LaTeX/compile2.log Normal file
@ -0,0 +1,394 @@
[Second pdflatex pass: log is identical to LaTeX/compile1.log above, except that the
"LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right."
message no longer appears, confirming the cross-references resolved on the rerun.
Final line unchanged: Output written on paper.pdf (29 pages, 642536 bytes).]

View File

@@ -0,0 +1,88 @@
"""
Master script to generate all figures for the paper
Run this to create all PDF and PNG figures at once
"""
import subprocess
import sys
import os
# Change to LaTeX directory
script_dir = os.path.dirname(os.path.abspath(__file__))
os.chdir(script_dir)
scripts = [
'generate_slipnet_graph.py',
'compare_formulas.py',
'activation_spreading.py',
'resistance_distance.py',
'clustering_analysis.py',
'workspace_evolution.py',
]
print("="*70)
print("Generating all figures for the paper:")
print(" 'From Hardcoded Heuristics to Graph-Theoretical Constructs'")
print("="*70)
print()
failed_scripts = []
for i, script in enumerate(scripts, 1):
print(f"[{i}/{len(scripts)}] Running {script}...")
try:
result = subprocess.run([sys.executable, script],
capture_output=True,
text=True,
timeout=60)
if result.returncode == 0:
print(f" ✓ Success")
if result.stdout:
print(f" {result.stdout.strip()}")
else:
print(f" ✗ Failed with return code {result.returncode}")
if result.stderr:
print(f" Error: {result.stderr.strip()}")
failed_scripts.append(script)
except subprocess.TimeoutExpired:
print(f" ✗ Timeout (>60s)")
failed_scripts.append(script)
except Exception as e:
print(f" ✗ Exception: {e}")
failed_scripts.append(script)
print()
print("="*70)
print("Summary:")
print("="*70)
if not failed_scripts:
print("✓ All figures generated successfully!")
print()
print("Generated files:")
print(" - figure1_slipnet_graph.pdf/.png")
print(" - figure2_activation_spreading.pdf/.png")
print(" - figure3_resistance_distance.pdf/.png")
print(" - figure4_workspace_evolution.pdf/.png")
print(" - figure5_betweenness_dynamics.pdf/.png")
print(" - figure6_clustering_distribution.pdf/.png")
print(" - formula_comparison.pdf/.png")
print(" - scalability_comparison.pdf/.png")
print(" - slippability_temperature.pdf/.png")
print(" - external_strength_comparison.pdf/.png")
print()
print("You can now compile the LaTeX document with these figures.")
print("To include them in paper.tex, replace the placeholder \\fbox commands")
print("with \\includegraphics commands:")
print()
print(" \\includegraphics[width=0.8\\textwidth]{figure1_slipnet_graph.pdf}")
else:
print(f"{len(failed_scripts)} script(s) failed:")
for script in failed_scripts:
print(f" - {script}")
print()
print("Please check the error messages above and ensure you have")
print("the required packages installed:")
print(" pip install matplotlib numpy networkx scipy")
print("="*70)

LaTeX/generate_slipnet_graph.py Normal file
View File

@@ -0,0 +1,140 @@
"""
Generate Slipnet graph visualization (Figure 1)
Shows conceptual depth as node color gradient, with key Slipnet nodes and connections.
"""
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
# Define key Slipnet nodes with their conceptual depths
nodes = {
# Letters (depth 10)
'a': 10, 'b': 10, 'c': 10, 'd': 10, 'z': 10,
# Numbers (depth 30)
'1': 30, '2': 30, '3': 30,
# String positions (depth 40)
'leftmost': 40, 'rightmost': 40, 'middle': 40, 'single': 40,
# Directions (depth 40)
'left': 40, 'right': 40,
# Alphabetic positions (depth 60)
'first': 60, 'last': 60,
# Bond types (depth 50-80)
'predecessor': 50, 'successor': 50, 'sameness': 80,
# Group types (depth 50-80)
'predecessorGroup': 50, 'successorGroup': 50, 'samenessGroup': 80,
# Relations (depth 90)
'identity': 90, 'opposite': 90,
# Categories (depth 20-90)
'letterCategory': 30, 'stringPositionCategory': 70,
'directionCategory': 70, 'bondCategory': 80, 'length': 60,
}
# Define edges with their link lengths (inverse = strength)
edges = [
# Letter to letterCategory
('a', 'letterCategory', 97), ('b', 'letterCategory', 97),
('c', 'letterCategory', 97), ('d', 'letterCategory', 97),
('z', 'letterCategory', 97),
# Successor/predecessor relationships
('a', 'b', 50), ('b', 'c', 50), ('c', 'd', 50),
('b', 'a', 50), ('c', 'b', 50), ('d', 'c', 50),
# Bond types to bond category
('predecessor', 'bondCategory', 60), ('successor', 'bondCategory', 60),
('sameness', 'bondCategory', 30),
# Group types
('sameness', 'samenessGroup', 30),
('predecessor', 'predecessorGroup', 60),
('successor', 'successorGroup', 60),
# Opposite relations
('left', 'right', 80), ('right', 'left', 80),
('first', 'last', 80), ('last', 'first', 80),
# Position relationships
('left', 'directionCategory', 50), ('right', 'directionCategory', 50),
('leftmost', 'stringPositionCategory', 50),
('rightmost', 'stringPositionCategory', 50),
('middle', 'stringPositionCategory', 50),
# Slippable connections
('left', 'leftmost', 90), ('leftmost', 'left', 90),
('right', 'rightmost', 90), ('rightmost', 'right', 90),
('leftmost', 'first', 100), ('first', 'leftmost', 100),
('rightmost', 'last', 100), ('last', 'rightmost', 100),
# Abstract relations
('identity', 'bondCategory', 50),
('opposite', 'bondCategory', 80),
]
# Create graph
G = nx.DiGraph()
# Add nodes with depth attribute
for node, depth in nodes.items():
G.add_node(node, depth=depth)
# Add edges with link length
for source, target, length in edges:
G.add_edge(source, target, length=length, weight=100-length)
# Create figure
fig, ax = plt.subplots(figsize=(16, 12))
# Use hierarchical layout based on depth
pos = {}
depth_groups = {}
for node in G.nodes():
depth = G.nodes[node]['depth']
if depth not in depth_groups:
depth_groups[depth] = []
depth_groups[depth].append(node)
# Position nodes by depth (y-axis) and spread horizontally
for depth, node_list in depth_groups.items():
y = 1.0 - (depth / 100.0) # Invert so shallow nodes at top
for i, node in enumerate(node_list):
x = (i - len(node_list)/2) / max(len(node_list), 10) * 2.5
pos[node] = (x, y)
# Get node colors based on depth (blue=shallow/concrete, red=deep/abstract)
node_colors = [G.nodes[node]['depth'] for node in G.nodes()]
# Draw edges with thickness based on strength (inverse of link length)
edges_to_draw = G.edges()
edge_widths = [0.3 + (100 - G[u][v]['length']) / 100.0 * 3 for u, v in edges_to_draw]
nx.draw_networkx_edges(G, pos, edgelist=edges_to_draw, width=edge_widths,
alpha=0.3, arrows=True, arrowsize=10,
connectionstyle='arc3,rad=0.1', ax=ax)
# Draw nodes
nx.draw_networkx_nodes(G, pos, node_color=node_colors,
node_size=800, cmap='coolwarm',
vmin=0, vmax=100, ax=ax)
# Draw labels
nx.draw_networkx_labels(G, pos, font_size=8, font_weight='bold', ax=ax)
# Add colorbar
sm = plt.cm.ScalarMappable(cmap='coolwarm',
norm=plt.Normalize(vmin=0, vmax=100))
sm.set_array([])
cbar = plt.colorbar(sm, ax=ax, fraction=0.046, pad=0.04)
cbar.set_label('Conceptual Depth', rotation=270, labelpad=20, fontsize=12)
ax.set_title('Slipnet Graph Structure\n' +
'Color gradient: Blue (concrete/shallow) → Red (abstract/deep)\n' +
'Edge thickness: Link strength (inverse of link length)',
fontsize=14, fontweight='bold', pad=20)
ax.axis('off')
plt.tight_layout()
plt.savefig('figure1_slipnet_graph.pdf', dpi=300, bbox_inches='tight')
plt.savefig('figure1_slipnet_graph.png', dpi=300, bbox_inches='tight')
print("Generated figure1_slipnet_graph.pdf and .png")
plt.close()

115
LaTeX/paper.aux Normal file
View File

@@ -0,0 +1,115 @@
\relax
\providecommand\hyper@newdestlabel[2]{}
\providecommand\HyField@AuxAddToFields[1]{}
\providecommand\HyField@AuxAddToCoFields[2]{}
\citation{mitchell1993analogy,hofstadter1995fluid}
\@writefile{toc}{\contentsline {section}{\numberline {1}Introduction}{1}{section.1}\protected@file@percent }
\@writefile{toc}{\contentsline {section}{\numberline {2}The Problem with Hardcoded Constants}{3}{section.2}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {2.1}Brittleness and Domain Specificity}{3}{subsection.2.1}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {2.2}Catalog of Hardcoded Constants}{4}{subsection.2.2}\protected@file@percent }
\@writefile{lot}{\contentsline {table}{\numberline {1}{\ignorespaces Major hardcoded constants in Copycat implementation. Values are empirically determined rather than derived from principles.}}{4}{table.1}\protected@file@percent }
\newlabel{tab:constants}{{1}{4}{Major hardcoded constants in Copycat implementation. Values are empirically determined rather than derived from principles}{table.1}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {2.3}Lack of Principled Justification}{4}{subsection.2.3}\protected@file@percent }
\citation{watts1998collective}
\@writefile{toc}{\contentsline {subsection}{\numberline {2.4}Scalability Limitations}{5}{subsection.2.4}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {2.5}Cognitive Implausibility}{5}{subsection.2.5}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {2.6}The Case for Graph-Theoretical Reformulation}{6}{subsection.2.6}\protected@file@percent }
\@writefile{toc}{\contentsline {section}{\numberline {3}The Slipnet and its Graph Operations}{6}{section.3}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {3.1}Slipnet as a Semantic Network}{6}{subsection.3.1}\protected@file@percent }
\@writefile{lot}{\contentsline {table}{\numberline {2}{\ignorespaces Slipnet node types with conceptual depths, counts, and average connectivity. Letter nodes are most concrete (depth 10), while abstract relations have depth 90.}}{7}{table.2}\protected@file@percent }
\newlabel{tab:slipnodes}{{2}{7}{Slipnet node types with conceptual depths, counts, and average connectivity. Letter nodes are most concrete (depth 10), while abstract relations have depth 90}{table.2}{}}
\@writefile{toc}{\contentsline {paragraph}{Category Links}{7}{section*.1}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Instance Links}{7}{section*.2}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Property Links}{7}{section*.3}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Lateral Slip Links}{7}{section*.4}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Lateral Non-Slip Links}{8}{section*.5}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {3.2}Conceptual Depth as Minimum Distance to Low-Level Nodes}{8}{subsection.3.2}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {3.3}Slippage via Dynamic Weight Adjustment}{9}{subsection.3.3}\protected@file@percent }
\citation{klein1993resistance}
\@writefile{toc}{\contentsline {subsection}{\numberline {3.4}Graph Visualization and Metrics}{10}{subsection.3.4}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {1}{\ignorespaces Slipnet graph structure with conceptual depth encoded as node color intensity and link strength as edge thickness.}}{11}{figure.1}\protected@file@percent }
\newlabel{fig:slipnet}{{1}{11}{Slipnet graph structure with conceptual depth encoded as node color intensity and link strength as edge thickness}{figure.1}{}}
\@writefile{toc}{\contentsline {section}{\numberline {4}The Workspace as a Dynamic Graph}{11}{section.4}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {2}{\ignorespaces Activation spreading over time demonstrates differential decay: shallow nodes (letters) lose activation rapidly while deep nodes (abstract concepts) persist.}}{12}{figure.2}\protected@file@percent }
\newlabel{fig:activation_spread}{{2}{12}{Activation spreading over time demonstrates differential decay: shallow nodes (letters) lose activation rapidly while deep nodes (abstract concepts) persist}{figure.2}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces Resistance distance heat map reveals multi-path connectivity: concepts connected by multiple routes show lower resistance than single-path connections.}}{12}{figure.3}\protected@file@percent }
\newlabel{fig:resistance_distance}{{3}{12}{Resistance distance heat map reveals multi-path connectivity: concepts connected by multiple routes show lower resistance than single-path connections}{figure.3}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {4.1}Workspace Graph Structure}{13}{subsection.4.1}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {4.2}Graph Betweenness for Structural Importance}{13}{subsection.4.2}\protected@file@percent }
\citation{freeman1977set,brandes2001faster}
\citation{brandes2001faster}
\citation{watts1998collective}
\@writefile{toc}{\contentsline {subsection}{\numberline {4.3}Local Graph Density and Clustering Coefficients}{15}{subsection.4.3}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {4.4}Complete Substitution Table}{16}{subsection.4.4}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {4.5}Algorithmic Implementations}{16}{subsection.4.5}\protected@file@percent }
\@writefile{lot}{\contentsline {table}{\numberline {3}{\ignorespaces Proposed graph-theoretical replacements for hardcoded constants. Each metric provides principled, adaptive measurement based on graph structure.}}{17}{table.3}\protected@file@percent }
\newlabel{tab:substitutions}{{3}{17}{Proposed graph-theoretical replacements for hardcoded constants. Each metric provides principled, adaptive measurement based on graph structure}{table.3}{}}
\@writefile{loa}{\contentsline {algorithm}{\numberline {1}{\ignorespaces Graph-Based Bond External Strength}}{17}{algorithm.1}\protected@file@percent }
\newlabel{alg:bond_strength}{{1}{17}{Algorithmic Implementations}{algorithm.1}{}}
\@writefile{loa}{\contentsline {algorithm}{\numberline {2}{\ignorespaces Betweenness-Based Salience}}{18}{algorithm.2}\protected@file@percent }
\newlabel{alg:betweenness_salience}{{2}{18}{Algorithmic Implementations}{algorithm.2}{}}
\@writefile{loa}{\contentsline {algorithm}{\numberline {3}{\ignorespaces Adaptive Activation Threshold}}{18}{algorithm.3}\protected@file@percent }
\newlabel{alg:adaptive_threshold}{{3}{18}{Algorithmic Implementations}{algorithm.3}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {4.6}Workspace Evolution Visualization}{18}{subsection.4.6}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Workspace graph evolution during analogical reasoning shows progressive structure formation, with betweenness centrality values identifying strategically important objects.}}{19}{figure.4}\protected@file@percent }
\newlabel{fig:workspace_evolution}{{4}{19}{Workspace graph evolution during analogical reasoning shows progressive structure formation, with betweenness centrality values identifying strategically important objects}{figure.4}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces Betweenness centrality dynamics reveal that objects with sustained high centrality are preferentially selected for correspondences.}}{19}{figure.5}\protected@file@percent }
\newlabel{fig:betweenness_dynamics}{{5}{19}{Betweenness centrality dynamics reveal that objects with sustained high centrality are preferentially selected for correspondences}{figure.5}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {6}{\ignorespaces Successful analogy-making runs show higher clustering coefficients, indicating that locally dense structure promotes coherent solutions.}}{20}{figure.6}\protected@file@percent }
\newlabel{fig:clustering_distribution}{{6}{20}{Successful analogy-making runs show higher clustering coefficients, indicating that locally dense structure promotes coherent solutions}{figure.6}{}}
\@writefile{toc}{\contentsline {section}{\numberline {5}Discussion}{20}{section.5}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {5.1}Theoretical Advantages}{20}{subsection.5.1}\protected@file@percent }
\citation{watts1998collective}
\@writefile{toc}{\contentsline {subsection}{\numberline {5.2}Adaptability and Scalability}{21}{subsection.5.2}\protected@file@percent }
\citation{brandes2001faster}
\@writefile{toc}{\contentsline {subsection}{\numberline {5.3}Computational Considerations}{22}{subsection.5.3}\protected@file@percent }
\@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Computational complexity of graph metrics and mitigation strategies. Here $n$ = nodes, $m$ = edges, $d$ = degree, $m_{sub}$ = edges in subgraph.}}{22}{table.4}\protected@file@percent }
\newlabel{tab:complexity}{{4}{22}{Computational complexity of graph metrics and mitigation strategies. Here $n$ = nodes, $m$ = edges, $d$ = degree, $m_{sub}$ = edges in subgraph}{table.4}{}}
\citation{newman2018networks}
\@writefile{toc}{\contentsline {subsection}{\numberline {5.4}Empirical Predictions and Testable Hypotheses}{23}{subsection.5.4}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Hypothesis 1: Improved Performance Consistency}{23}{section*.6}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Hypothesis 2: Temperature-Graph Entropy Correlation}{23}{section*.7}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Hypothesis 3: Clustering Predicts Success}{23}{section*.8}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Hypothesis 4: Betweenness Predicts Correspondence Selection}{23}{section*.9}\protected@file@percent }
\citation{gentner1983structure}
\citation{scarselli2008graph}
\citation{gardenfors2000conceptual}
\citation{watts1998collective}
\@writefile{toc}{\contentsline {paragraph}{Hypothesis 5: Graceful Degradation}{24}{section*.10}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {5.5}Connections to Related Work}{24}{subsection.5.5}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Analogical Reasoning}{24}{section*.11}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Graph Neural Networks}{24}{section*.12}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Conceptual Spaces}{24}{section*.13}\protected@file@percent }
\citation{newman2018networks}
\@writefile{toc}{\contentsline {paragraph}{Small-World Networks}{25}{section*.14}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Network Science in Cognition}{25}{section*.15}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {5.6}Limitations and Open Questions}{25}{subsection.5.6}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Parameter Selection}{25}{section*.16}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Multi-Relational Graphs}{25}{section*.17}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Temporal Dynamics}{25}{section*.18}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Learning and Meta-Learning}{26}{section*.19}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {5.7}Broader Implications}{26}{subsection.5.7}\protected@file@percent }
\@writefile{toc}{\contentsline {section}{\numberline {6}Conclusion}{26}{section.6}\protected@file@percent }
\citation{forbus2017companion}
\@writefile{toc}{\contentsline {subsection}{\numberline {6.1}Future Work}{27}{subsection.6.1}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Implementation and Validation}{27}{section*.20}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Domain Transfer}{27}{section*.21}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Neuroscience Comparison}{27}{section*.22}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Hybrid Neural-Symbolic Systems}{27}{section*.23}\protected@file@percent }
\@writefile{toc}{\contentsline {paragraph}{Meta-Learning Metric Selection}{27}{section*.24}\protected@file@percent }
\bibstyle{plain}
\bibdata{references}
\bibcite{brandes2001faster}{1}
\bibcite{forbus2017companion}{2}
\bibcite{freeman1977set}{3}
\bibcite{gardenfors2000conceptual}{4}
\@writefile{toc}{\contentsline {paragraph}{Extension to Other Cognitive Architectures}{28}{section*.25}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {6.2}Closing Perspective}{28}{subsection.6.2}\protected@file@percent }
\bibcite{gentner1983structure}{5}
\bibcite{hofstadter1995fluid}{6}
\bibcite{klein1993resistance}{7}
\bibcite{mitchell1993analogy}{8}
\bibcite{newman2018networks}{9}
\bibcite{scarselli2008graph}{10}
\bibcite{watts1998collective}{11}
\gdef \@abspage@last{29}

60
LaTeX/paper.bbl Normal file
View File

@@ -0,0 +1,60 @@
\begin{thebibliography}{10}
\bibitem{brandes2001faster}
Ulrik Brandes.
\newblock A faster algorithm for betweenness centrality.
\newblock {\em Journal of Mathematical Sociology}, 25(2):163--177, 2001.
\bibitem{forbus2017companion}
Kenneth~D. Forbus and Thomas~R. Hinrichs.
\newblock Companion cognitive systems: A step toward human-level ai.
\newblock {\em AI Magazine}, 38(4):25--35, 2017.
\bibitem{freeman1977set}
Linton~C. Freeman.
\newblock A set of measures of centrality based on betweenness.
\newblock {\em Sociometry}, 40(1):35--41, 1977.
\bibitem{gardenfors2000conceptual}
Peter G\"{a}rdenfors.
\newblock {\em Conceptual Spaces: The Geometry of Thought}.
\newblock MIT Press, Cambridge, MA, 2000.
\bibitem{gentner1983structure}
Dedre Gentner.
\newblock Structure-mapping: A theoretical framework for analogy.
\newblock {\em Cognitive Science}, 7(2):155--170, 1983.
\bibitem{hofstadter1995fluid}
Douglas~R. Hofstadter and FARG.
\newblock {\em Fluid Concepts and Creative Analogies: Computer Models of the
Fundamental Mechanisms of Thought}.
\newblock Basic Books, New York, NY, 1995.
\bibitem{klein1993resistance}
Douglas~J. Klein and Milan Randi\'{c}.
\newblock Resistance distance.
\newblock {\em Journal of Mathematical Chemistry}, 12(1):81--95, 1993.
\bibitem{mitchell1993analogy}
Melanie Mitchell.
\newblock {\em Analogy-Making as Perception: A Computer Model}.
\newblock MIT Press, Cambridge, MA, 1993.
\bibitem{newman2018networks}
Mark E.~J. Newman.
\newblock {\em Networks}.
\newblock Oxford University Press, Oxford, UK, 2nd edition, 2018.
\bibitem{scarselli2008graph}
Franco Scarselli, Marco Gori, Ah~Chung Tsoi, Markus Hagenbuchner, and Gabriele
Monfardini.
\newblock The graph neural network model.
\newblock {\em IEEE Transactions on Neural Networks}, 20(1):61--80, 2008.
\bibitem{watts1998collective}
Duncan~J. Watts and Steven~H. Strogatz.
\newblock Collective dynamics of 'small-world' networks.
\newblock {\em Nature}, 393(6684):440--442, 1998.
\end{thebibliography}

48
LaTeX/paper.blg Normal file
View File

@@ -0,0 +1,48 @@
This is BibTeX, Version 0.99e
Capacity: max_strings=200000, hash_size=200000, hash_prime=170003
The top-level auxiliary file: paper.aux
Reallocating 'name_of_file' (item size: 1) to 6 items.
The style file: plain.bst
Reallocating 'name_of_file' (item size: 1) to 11 items.
Database file #1: references.bib
You've used 11 entries,
2118 wiz_defined-function locations,
576 strings with 5462 characters,
and the built_in function-call counts, 3192 in all, are:
= -- 319
> -- 122
< -- 0
+ -- 52
- -- 38
* -- 219
:= -- 551
add.period$ -- 33
call.type$ -- 11
change.case$ -- 49
chr.to.int$ -- 0
cite$ -- 11
duplicate$ -- 125
empty$ -- 270
format.name$ -- 38
if$ -- 652
int.to.chr$ -- 0
int.to.str$ -- 11
missing$ -- 15
newline$ -- 58
num.names$ -- 22
pop$ -- 49
preamble$ -- 1
purify$ -- 41
quote$ -- 0
skip$ -- 76
stack$ -- 0
substring$ -- 209
swap$ -- 11
text.length$ -- 0
text.prefix$ -- 0
top$ -- 0
type$ -- 36
warning$ -- 0
while$ -- 36
width$ -- 13
write$ -- 124

1072
LaTeX/paper.log Normal file

File diff suppressed because it is too large

31
LaTeX/paper.out Normal file
View File

@@ -0,0 +1,31 @@
\BOOKMARK [1][-]{section.1}{\376\377\000I\000n\000t\000r\000o\000d\000u\000c\000t\000i\000o\000n}{}% 1
\BOOKMARK [1][-]{section.2}{\376\377\000T\000h\000e\000\040\000P\000r\000o\000b\000l\000e\000m\000\040\000w\000i\000t\000h\000\040\000H\000a\000r\000d\000c\000o\000d\000e\000d\000\040\000C\000o\000n\000s\000t\000a\000n\000t\000s}{}% 2
\BOOKMARK [2][-]{subsection.2.1}{\376\377\000B\000r\000i\000t\000t\000l\000e\000n\000e\000s\000s\000\040\000a\000n\000d\000\040\000D\000o\000m\000a\000i\000n\000\040\000S\000p\000e\000c\000i\000f\000i\000c\000i\000t\000y}{section.2}% 3
\BOOKMARK [2][-]{subsection.2.2}{\376\377\000C\000a\000t\000a\000l\000o\000g\000\040\000o\000f\000\040\000H\000a\000r\000d\000c\000o\000d\000e\000d\000\040\000C\000o\000n\000s\000t\000a\000n\000t\000s}{section.2}% 4
\BOOKMARK [2][-]{subsection.2.3}{\376\377\000L\000a\000c\000k\000\040\000o\000f\000\040\000P\000r\000i\000n\000c\000i\000p\000l\000e\000d\000\040\000J\000u\000s\000t\000i\000f\000i\000c\000a\000t\000i\000o\000n}{section.2}% 5
\BOOKMARK [2][-]{subsection.2.4}{\376\377\000S\000c\000a\000l\000a\000b\000i\000l\000i\000t\000y\000\040\000L\000i\000m\000i\000t\000a\000t\000i\000o\000n\000s}{section.2}% 6
\BOOKMARK [2][-]{subsection.2.5}{\376\377\000C\000o\000g\000n\000i\000t\000i\000v\000e\000\040\000I\000m\000p\000l\000a\000u\000s\000i\000b\000i\000l\000i\000t\000y}{section.2}% 7
\BOOKMARK [2][-]{subsection.2.6}{\376\377\000T\000h\000e\000\040\000C\000a\000s\000e\000\040\000f\000o\000r\000\040\000G\000r\000a\000p\000h\000-\000T\000h\000e\000o\000r\000e\000t\000i\000c\000a\000l\000\040\000R\000e\000f\000o\000r\000m\000u\000l\000a\000t\000i\000o\000n}{section.2}% 8
\BOOKMARK [1][-]{section.3}{\376\377\000T\000h\000e\000\040\000S\000l\000i\000p\000n\000e\000t\000\040\000a\000n\000d\000\040\000i\000t\000s\000\040\000G\000r\000a\000p\000h\000\040\000O\000p\000e\000r\000a\000t\000i\000o\000n\000s}{}% 9
\BOOKMARK [2][-]{subsection.3.1}{\376\377\000S\000l\000i\000p\000n\000e\000t\000\040\000a\000s\000\040\000a\000\040\000S\000e\000m\000a\000n\000t\000i\000c\000\040\000N\000e\000t\000w\000o\000r\000k}{section.3}% 10
\BOOKMARK [2][-]{subsection.3.2}{\376\377\000C\000o\000n\000c\000e\000p\000t\000u\000a\000l\000\040\000D\000e\000p\000t\000h\000\040\000a\000s\000\040\000M\000i\000n\000i\000m\000u\000m\000\040\000D\000i\000s\000t\000a\000n\000c\000e\000\040\000t\000o\000\040\000L\000o\000w\000-\000L\000e\000v\000e\000l\000\040\000N\000o\000d\000e\000s}{section.3}% 11
\BOOKMARK [2][-]{subsection.3.3}{\376\377\000S\000l\000i\000p\000p\000a\000g\000e\000\040\000v\000i\000a\000\040\000D\000y\000n\000a\000m\000i\000c\000\040\000W\000e\000i\000g\000h\000t\000\040\000A\000d\000j\000u\000s\000t\000m\000e\000n\000t}{section.3}% 12
\BOOKMARK [2][-]{subsection.3.4}{\376\377\000G\000r\000a\000p\000h\000\040\000V\000i\000s\000u\000a\000l\000i\000z\000a\000t\000i\000o\000n\000\040\000a\000n\000d\000\040\000M\000e\000t\000r\000i\000c\000s}{section.3}% 13
\BOOKMARK [1][-]{section.4}{\376\377\000T\000h\000e\000\040\000W\000o\000r\000k\000s\000p\000a\000c\000e\000\040\000a\000s\000\040\000a\000\040\000D\000y\000n\000a\000m\000i\000c\000\040\000G\000r\000a\000p\000h}{}% 14
\BOOKMARK [2][-]{subsection.4.1}{\376\377\000W\000o\000r\000k\000s\000p\000a\000c\000e\000\040\000G\000r\000a\000p\000h\000\040\000S\000t\000r\000u\000c\000t\000u\000r\000e}{section.4}% 15
\BOOKMARK [2][-]{subsection.4.2}{\376\377\000G\000r\000a\000p\000h\000\040\000B\000e\000t\000w\000e\000e\000n\000n\000e\000s\000s\000\040\000f\000o\000r\000\040\000S\000t\000r\000u\000c\000t\000u\000r\000a\000l\000\040\000I\000m\000p\000o\000r\000t\000a\000n\000c\000e}{section.4}% 16
\BOOKMARK [2][-]{subsection.4.3}{\376\377\000L\000o\000c\000a\000l\000\040\000G\000r\000a\000p\000h\000\040\000D\000e\000n\000s\000i\000t\000y\000\040\000a\000n\000d\000\040\000C\000l\000u\000s\000t\000e\000r\000i\000n\000g\000\040\000C\000o\000e\000f\000f\000i\000c\000i\000e\000n\000t\000s}{section.4}% 17
\BOOKMARK [2][-]{subsection.4.4}{\376\377\000C\000o\000m\000p\000l\000e\000t\000e\000\040\000S\000u\000b\000s\000t\000i\000t\000u\000t\000i\000o\000n\000\040\000T\000a\000b\000l\000e}{section.4}% 18
\BOOKMARK [2][-]{subsection.4.5}{\376\377\000A\000l\000g\000o\000r\000i\000t\000h\000m\000i\000c\000\040\000I\000m\000p\000l\000e\000m\000e\000n\000t\000a\000t\000i\000o\000n\000s}{section.4}% 19
\BOOKMARK [2][-]{subsection.4.6}{\376\377\000W\000o\000r\000k\000s\000p\000a\000c\000e\000\040\000E\000v\000o\000l\000u\000t\000i\000o\000n\000\040\000V\000i\000s\000u\000a\000l\000i\000z\000a\000t\000i\000o\000n}{section.4}% 20
\BOOKMARK [1][-]{section.5}{\376\377\000D\000i\000s\000c\000u\000s\000s\000i\000o\000n}{}% 21
\BOOKMARK [2][-]{subsection.5.1}{\376\377\000T\000h\000e\000o\000r\000e\000t\000i\000c\000a\000l\000\040\000A\000d\000v\000a\000n\000t\000a\000g\000e\000s}{section.5}% 22
\BOOKMARK [2][-]{subsection.5.2}{\376\377\000A\000d\000a\000p\000t\000a\000b\000i\000l\000i\000t\000y\000\040\000a\000n\000d\000\040\000S\000c\000a\000l\000a\000b\000i\000l\000i\000t\000y}{section.5}% 23
\BOOKMARK [2][-]{subsection.5.3}{\376\377\000C\000o\000m\000p\000u\000t\000a\000t\000i\000o\000n\000a\000l\000\040\000C\000o\000n\000s\000i\000d\000e\000r\000a\000t\000i\000o\000n\000s}{section.5}% 24
\BOOKMARK [2][-]{subsection.5.4}{\376\377\000E\000m\000p\000i\000r\000i\000c\000a\000l\000\040\000P\000r\000e\000d\000i\000c\000t\000i\000o\000n\000s\000\040\000a\000n\000d\000\040\000T\000e\000s\000t\000a\000b\000l\000e\000\040\000H\000y\000p\000o\000t\000h\000e\000s\000e\000s}{section.5}% 25
\BOOKMARK [2][-]{subsection.5.5}{\376\377\000C\000o\000n\000n\000e\000c\000t\000i\000o\000n\000s\000\040\000t\000o\000\040\000R\000e\000l\000a\000t\000e\000d\000\040\000W\000o\000r\000k}{section.5}% 26
\BOOKMARK [2][-]{subsection.5.6}{\376\377\000L\000i\000m\000i\000t\000a\000t\000i\000o\000n\000s\000\040\000a\000n\000d\000\040\000O\000p\000e\000n\000\040\000Q\000u\000e\000s\000t\000i\000o\000n\000s}{section.5}% 27
\BOOKMARK [2][-]{subsection.5.7}{\376\377\000B\000r\000o\000a\000d\000e\000r\000\040\000I\000m\000p\000l\000i\000c\000a\000t\000i\000o\000n\000s}{section.5}% 28
\BOOKMARK [1][-]{section.6}{\376\377\000C\000o\000n\000c\000l\000u\000s\000i\000o\000n}{}% 29
\BOOKMARK [2][-]{subsection.6.1}{\376\377\000F\000u\000t\000u\000r\000e\000\040\000W\000o\000r\000k}{section.6}% 30
\BOOKMARK [2][-]{subsection.6.2}{\376\377\000C\000l\000o\000s\000i\000n\000g\000\040\000P\000e\000r\000s\000p\000e\000c\000t\000i\000v\000e}{section.6}% 31

BIN
LaTeX/paper.pdf Normal file

Binary file not shown.

718
LaTeX/paper.tex Normal file
View File

@@ -0,0 +1,718 @@
\documentclass[11pt,a4paper]{article}
\usepackage{amsmath, amssymb, amsthm}
\usepackage{graphicx}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{tikz}
% Note: graphdrawing library requires LuaLaTeX, omitted for pdflatex compatibility
\usepackage{hyperref}
\usepackage{listings}
\usepackage{cite}
\usepackage{booktabs}
\usepackage{array}
\lstset{
basicstyle=\ttfamily\small,
breaklines=true,
frame=single,
numbers=left,
numberstyle=\tiny,
language=Python
}
\title{From Hardcoded Heuristics to Graph-Theoretical Constructs: \\
A Principled Reformulation of the Copycat Architecture}
\author{Alex Linhares}
\date{\today}
\begin{document}
\maketitle
\begin{abstract}
The Copycat architecture, developed by Mitchell and Hofstadter as a computational model of analogy-making, relies on numerous hardcoded constants and empirically-tuned formulas to regulate its behavior. While these parameters enable the system to exhibit fluid, human-like performance on letter-string analogy problems, they also introduce brittleness, lack theoretical justification, and limit the system's adaptability to new domains. This paper proposes a principled reformulation of Copycat's core mechanisms using graph-theoretical constructs. We demonstrate that many of the system's hardcoded constants—including bond strength factors, salience weights, and activation thresholds—can be replaced with well-studied graph metrics such as betweenness centrality, clustering coefficients, and resistance distance. This reformulation provides three key advantages: theoretical grounding in established mathematical frameworks, automatic adaptation to problem structure without manual tuning, and increased interpretability of the system's behavior. We present concrete proposals for substituting specific constants with graph metrics, analyze the computational implications, and discuss how this approach bridges classical symbolic AI with modern graph-based machine learning.
\end{abstract}
\section{Introduction}
Analogy-making stands as one of the most fundamental cognitive abilities, enabling humans to transfer knowledge across domains, recognize patterns in novel situations, and generate creative insights. Hofstadter and Mitchell's Copycat system~\cite{mitchell1993analogy,hofstadter1995fluid} represents a landmark achievement in modeling this capacity computationally. Given a simple analogy problem such as ``if abc changes to abd, what does ppqqrr change to?,'' Copycat constructs representations, explores alternatives, and produces answers that exhibit remarkable similarity to human response distributions. The system's architecture combines a permanent semantic network (the Slipnet) with a dynamic working memory (the Workspace), coordinated through stochastic codelets and regulated by a global temperature parameter.
Despite its cognitive plausibility and empirical success, Copycat's implementation embodies a fundamental tension. The system aspires to model fluid, adaptive cognition, yet its behavior is governed by numerous hardcoded constants and ad-hoc formulas. Bond strength calculations employ fixed compatibility factors of 0.7 and 1.0, external support decays according to $0.6^{1/n^3}$, and salience weights rigidly partition importance between intra-string (0.8) and inter-string (0.2) contexts. These parameters were carefully tuned through experimentation to produce human-like behavior on the canonical problem set, but they lack principled derivation from first principles.
This paper argues that many of Copycat's hardcoded constants can be naturally replaced with graph-theoretical constructs. We observe that both the Slipnet and Workspace are fundamentally graphs: the Slipnet is a semantic network with concepts as nodes and relationships as edges, while the Workspace contains objects as nodes connected by bonds and correspondences. Rather than imposing fixed numerical parameters on these graphs, we can leverage their inherent structure through well-studied metrics from graph theory. Betweenness centrality provides a principled measure of structural importance, clustering coefficients quantify local density, resistance distance captures conceptual proximity, and percolation thresholds offer dynamic activation criteria.
Formally, we can represent Copycat as a tuple $\mathcal{C} = (\mathcal{S}, \mathcal{W}, \mathcal{R}, T)$ where $\mathcal{S}$ denotes the Slipnet (semantic network), $\mathcal{W}$ represents the Workspace (problem representation), $\mathcal{R}$ is the Coderack (action scheduling system), and $T$ captures the global temperature (exploration-exploitation balance). This paper focuses on reformulating $\mathcal{S}$ and $\mathcal{W}$ as graphs with principled metrics, demonstrating how graph-theoretical constructs can replace hardcoded parameters while maintaining or improving the system's cognitive fidelity.
The benefits of this reformulation extend beyond theoretical elegance. Graph metrics automatically adapt to problem structure—betweenness centrality adjusts to the actual topological configuration rather than assuming fixed importance weights. The approach provides natural interpretability through visualization and standard metrics. Computational graph theory offers efficient algorithms with known complexity bounds. Furthermore, this reformulation bridges Copycat's symbolic architecture with modern graph neural networks, opening pathways for hybrid approaches that combine classical AI's interpretability with contemporary machine learning's adaptability.
The remainder of this paper proceeds as follows. Section 2 catalogs Copycat's hardcoded constants and analyzes their limitations. Section 3 examines the Slipnet's graph structure and proposes distance-based reformulations of conceptual depth and slippage. Section 4 analyzes the Workspace as a dynamic graph and demonstrates how betweenness centrality and clustering coefficients can replace salience weights and support factors. Section 5 discusses theoretical advantages, computational considerations, and empirical predictions. Section 6 concludes with future directions and broader implications for cognitive architecture design.
\section{The Problem with Hardcoded Constants}
The Copycat codebase contains numerous numerical constants and formulas that regulate system behavior. While these parameters enable Copycat to produce human-like analogies, they introduce four fundamental problems: brittleness, lack of justification, poor scalability, and cognitive implausibility.
\subsection{Brittleness and Domain Specificity}
Copycat's constants were empirically tuned for letter-string analogy problems with specific characteristics: strings of 2-6 characters, alphabetic sequences, and simple transformations. When the problem domain shifts—different alphabet sizes, numerical domains, or visual analogies—these constants may no longer produce appropriate behavior. The system cannot adapt its parameters based on problem structure; it applies the same fixed values regardless of context. This brittleness limits Copycat's utility as a general model of analogical reasoning.
Consider the bond strength calculation implemented in \texttt{bond.py:103-121}. The internal strength of a bond combines three factors: member compatibility (whether bonded objects are the same type), facet factor (whether the bond involves letter categories), and the bond category's degree of association. The member compatibility uses a simple binary choice:
\begin{lstlisting}
if sourceGap == destinationGap:
    memberCompatibility = 1.0
else:
    memberCompatibility = 0.7
\end{lstlisting}
Why 0.7 for mixed-type bonds rather than 0.65 or 0.75? The choice appears arbitrary, determined through trial and error rather than derived from principles. Similarly, the facet factor applies another binary distinction:
\begin{lstlisting}
if self.facet == slipnet.letterCategory:
    facetFactor = 1.0
else:
    facetFactor = 0.7
\end{lstlisting}
Again, the value 0.7 recurs without justification. This pattern pervades the codebase, as documented in Table~\ref{tab:constants}.
\subsection{Catalog of Hardcoded Constants}
Table~\ref{tab:constants} presents a comprehensive catalog of the major hardcoded constants found in Copycat's implementation, including their locations, values, purposes, and current formulations.
\begin{table}[htbp]
\centering
\small
\begin{tabular}{llllp{5cm}}
\toprule
\textbf{Constant} & \textbf{Location} & \textbf{Value} & \textbf{Purpose} & \textbf{Current Formula} \\
\midrule
memberCompatibility & bond.py:111 & 0.7/1.0 & Type compatibility & Discrete choice \\
facetFactor & bond.py:115 & 0.7/1.0 & Letter vs other facets & Discrete choice \\
supportFactor & bond.py:129 & $0.6^{1/n^3}$ & Support dampening & Power law \\
jump\_threshold & slipnode.py:131 & 55.0 & Activation cutoff & Fixed threshold \\
shrunkLinkLength & slipnode.py:15 & $0.4 \times \text{length}$ & Activated links & Linear scaling \\
activation\_decay & slipnode.py:118 & $a \times \frac{100-d}{100}$ & Energy dissipation & Linear depth \\
jump\_probability & slipnode.py:133 & $(a/100)^3$ & Stochastic boost & Cubic power \\
salience\_weights & workspaceObject.py:89 & (0.2, 0.8) & Intra-string importance & Fixed ratio \\
salience\_weights & workspaceObject.py:92 & (0.8, 0.2) & Inter-string importance & Fixed ratio (inverted) \\
length\_factors & group.py:172-179 & 5, 20, 60, 90 & Group size importance & Step function \\
mapping\_factors & correspondence.py:127 & 0.8, 1.2, 1.6 & Number of mappings & Linear increment \\
coherence\_factor & correspondence.py:133 & 2.5 & Internal coherence & Fixed multiplier \\
\bottomrule
\end{tabular}
\caption{Major hardcoded constants in Copycat implementation. Values are empirically determined rather than derived from principles.}
\label{tab:constants}
\end{table}
\subsection{Lack of Principled Justification}
The constants listed in Table~\ref{tab:constants} lack theoretical grounding. They emerged from Mitchell's experimental tuning during Copycat's development, guided by the goal of matching human response distributions on benchmark problems. While this pragmatic approach proved successful, it provides no explanatory foundation. Why should support decay as $0.6^{1/n^3}$ rather than $0.5^{1/n^2}$ or some other function? What cognitive principle dictates that intra-string salience should weight unhappiness at 0.8 versus importance at 0.2, while inter-string salience inverts this ratio?
The activation jump mechanism in the Slipnet exemplifies this issue. When a node's activation exceeds 55.0, the system probabilistically boosts it to full activation (100.0) with probability $(a/100)^3$. This creates a sharp phase transition that accelerates convergence. Yet the threshold of 55.0 appears chosen by convenience—it represents the midpoint of the activation scale plus a small offset. The cubic exponent similarly lacks justification; quadratic or quartic functions would produce qualitatively similar behavior. Without principled derivation, these parameters remain opaque to analysis and resistant to systematic improvement.
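For concreteness, the jump rule fits in a few lines. The following sketch paraphrases the logic of \texttt{slipnode.py} with hypothetical names (it is an illustration, not the exact implementation), making the interaction of the fixed threshold and the cubic exponent explicit:
\begin{lstlisting}
import random

JUMP_THRESHOLD = 55.0  # fixed cutoff on the 0-100 activation scale

def maybe_jump(activation):
    # Sketch: above the threshold, boost to full activation
    # with probability (a/100)^3; otherwise leave it unchanged.
    if activation > JUMP_THRESHOLD:
        if random.random() < (activation / 100.0) ** 3:
            return 100.0
    return activation
\end{lstlisting}
Nothing in the rule constrains the two magic numbers: replacing 55.0 with 50.0, or the cube with a square, yields a system that differs only in tuning, not in kind.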
\subsection{Scalability Limitations}
The hardcoded constants create scalability barriers when extending Copycat beyond its original problem domain. The group length factors provide a clear example. As implemented in \texttt{group.py:172-179}, the system assigns importance to groups based on their size through a step function:
\begin{equation}
\text{lengthFactor}(n) = \begin{cases}
5 & \text{if } n = 1 \\
20 & \text{if } n = 2 \\
60 & \text{if } n = 3 \\
90 & \text{if } n \geq 4
\end{cases}
\end{equation}
This formulation makes sense for letter strings of length 3-6, where groups of 4+ elements are indeed highly significant. But consider a problem involving strings of length 20. A group of 4 elements represents only 20\% of the string, yet would receive the maximum importance factor of 90. Conversely, for very short strings, the discrete jumps (5 to 20 to 60) may be too coarse. The step function does not scale gracefully across problem sizes.
Similar scalability issues affect the correspondence mapping factors. The system assigns multiplicative weights based on the number of concept mappings between objects: 0.8 for one mapping, 1.2 for two, 1.6 for three or more. This linear increment (0.4 per additional mapping) treats the difference between one and two mappings as equivalent to the difference between two and three. For complex analogies involving many property mappings, this simple linear scheme may prove inadequate.
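The scaling failure is easy to exhibit numerically. The sketch below is a direct transcription of the step function, followed by an illustrative comparison: a four-element group receives the maximum factor whether it covers two thirds of a six-letter string or one fifth of a twenty-letter one.
\begin{lstlisting}
def length_factor(n):
    # Step function from group.py:172-179
    if n == 1:
        return 5
    if n == 2:
        return 20
    if n == 3:
        return 60
    return 90

print(length_factor(4), 4 / 6)   # 90 for ~67% of a 6-letter string
print(length_factor(4), 4 / 20)  # 90 for 20% of a 20-letter string
\end{lstlisting}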
\subsection{Cognitive Implausibility}
Perhaps most critically, hardcoded constants conflict with basic principles of cognitive architecture. Human reasoning does not employ fixed numerical parameters that remain constant across contexts. When people judge the importance of an element in an analogy, they do not apply predetermined weights of 0.2 and 0.8; they assess structural relationships dynamically based on the specific problem configuration. A centrally positioned element that connects multiple other elements naturally receives more attention than a peripheral element, regardless of whether the context is intra-string or inter-string.
Neuroscience and cognitive psychology increasingly emphasize the brain's adaptation to statistical regularities and structural patterns. Neural networks exhibit graph properties such as small-world topology and scale-free degree distributions~\cite{watts1998collective}. Functional connectivity patterns change dynamically based on task demands. Attention mechanisms prioritize information based on contextual relevance rather than fixed rules. Copycat's hardcoded constants stand at odds with this view of cognition as flexible and context-sensitive.
\subsection{The Case for Graph-Theoretical Reformulation}
These limitations motivate our central proposal: replace hardcoded constants with graph-theoretical constructs that adapt to structural properties. Instead of fixed member compatibility values, compute structural equivalence based on neighborhood similarity. Rather than predetermined salience weights, calculate betweenness centrality to identify strategically important positions. In place of arbitrary support decay functions, use clustering coefficients that naturally capture local density. Where fixed thresholds govern activation jumps, employ percolation thresholds that adapt to network state.
This reformulation addresses all four problems identified above. Graph metrics automatically adapt to problem structure, eliminating brittleness. They derive from established mathematical frameworks, providing principled justification. Standard graph algorithms scale efficiently to larger problems. Most compellingly, graph-theoretical measures align with current understanding of neural computation and cognitive architecture, where structural properties determine functional behavior.
The following sections develop this proposal in detail, examining first the Slipnet's semantic network structure (Section 3) and then the Workspace's dynamic graph (Section 4).
\section{The Slipnet and its Graph Operations}
The Slipnet implements Copycat's semantic memory as a network of concepts connected by various relationship types. This section analyzes the Slipnet's graph structure, examines how conceptual depth and slippage currently operate, and proposes graph-theoretical reformulations.
\subsection{Slipnet as a Semantic Network}
Formally, we define the Slipnet as a weighted, labeled graph $\mathcal{S} = (V, E, w, d)$ where:
\begin{itemize}
\item $V$ is the set of concept nodes (71 nodes total in the standard implementation)
\item $E \subseteq V \times V$ is the set of directed edges representing conceptual relationships
\item $w: E \rightarrow \mathbb{R}$ assigns link lengths (conceptual distances) to edges
\item $d: V \rightarrow \mathbb{R}$ assigns conceptual depth values to nodes
\end{itemize}
The Slipnet initialization code (\texttt{slipnet.py:43-115}) creates nodes representing several categories of concepts, as documented in Table~\ref{tab:slipnodes}.
\begin{table}[htbp]
\centering
\begin{tabular}{lllrr}
\toprule
\textbf{Node Type} & \textbf{Examples} & \textbf{Depth} & \textbf{Count} & \textbf{Avg Degree} \\
\midrule
Letters & a-z & 10 & 26 & 3.2 \\
Numbers & 1-5 & 30 & 5 & 1.4 \\
String positions & leftmost, rightmost, middle & 40 & 5 & 4.0 \\
Alphabetic positions & first, last & 60 & 2 & 2.0 \\
Directions & left, right & 40 & 2 & 4.5 \\
Bond types & predecessor, successor, sameness & 50-80 & 3 & 5.3 \\
Group types & predecessorGroup, etc. & 50-80 & 3 & 3.7 \\
Relations & identity, opposite & 90 & 2 & 3.0 \\
Categories & letterCategory, etc. & 20-90 & 9 & 12.8 \\
\bottomrule
\end{tabular}
\caption{Slipnet node types with conceptual depths, counts, and average connectivity. Letter nodes are most concrete (depth 10), while abstract relations have depth 90.}
\label{tab:slipnodes}
\end{table}
The Slipnet employs five distinct edge types, each serving a different semantic function in the network. These edge types, created in \texttt{slipnet.py:200-236}, establish the relationships that enable analogical reasoning:
\paragraph{Category Links} form taxonomic hierarchies, connecting specific instances to their parent categories. For example, each letter node (a, b, c, ..., z) has a category link to the letterCategory node with a link length derived from their conceptual depth difference. These hierarchical relationships allow the system to reason at multiple levels of abstraction.
\paragraph{Instance Links} represent the inverse of category relationships, pointing from categories to their members. The letterCategory node maintains instance links to all letter nodes. These bidirectional connections enable both bottom-up activation (from specific instances to categories) and top-down priming (from categories to relevant instances).
\paragraph{Property Links} connect objects to their attributes and descriptors. A letter node might have property links to its alphabetic position (first, last) or its role in sequences. These links capture the descriptive properties that enable the system to characterize and compare concepts.
\paragraph{Lateral Slip Links} form the foundation of analogical mapping by connecting conceptually similar nodes that can substitute for each other. The paradigmatic example is the opposite link connecting left $\leftrightarrow$ right and first $\leftrightarrow$ last. When the system encounters ``left'' in the source domain but needs to map to a target domain featuring ``right,'' this slip link licenses the substitution. The slippability of such connections depends on link strength and conceptual depth, as we discuss in Section 3.3.
\paragraph{Lateral Non-Slip Links} establish fixed structural relationships that do not permit analogical substitution. For example, the successor relationship connecting a $\rightarrow$ b $\rightarrow$ c defines sequential structure that cannot be altered through slippage. These links provide stable scaffolding for the semantic network.
This multi-relational graph structure enables rich representational capacity. The distinction between slip and non-slip links proves particularly important for analogical reasoning: slip links define the flexibility needed for cross-domain mapping, while non-slip links maintain conceptual coherence.
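As a minimal sketch of this structure (assuming NetworkX; the attribute names \texttt{kind} and \texttt{slip} are ours, not Copycat's), the five link types can be carried as edge attributes on a directed multigraph, making the slip/non-slip distinction explicit and queryable:
\begin{lstlisting}
import networkx as nx

S = nx.MultiDiGraph()
# Category and instance links between a letter and its category
S.add_edge('a', 'letterCategory', kind='category', slip=False)
S.add_edge('letterCategory', 'a', kind='instance', slip=False)
# Lateral slip links licensing left <-> right substitution
S.add_edge('left', 'right', kind='lateralSlip', slip=True)
S.add_edge('right', 'left', kind='lateralSlip', slip=True)
# Lateral non-slip link fixing sequential structure
S.add_edge('a', 'b', kind='lateralNonSlip', slip=False)

# The slip links alone define the space of admissible substitutions
slip_edges = [(u, v) for u, v, d in S.edges(data=True) if d['slip']]
\end{lstlisting}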
\subsection{Conceptual Depth as Minimum Distance to Low-Level Nodes}
Conceptual depth represents one of Copycat's most important parameters, yet the current implementation assigns depth values manually to each node type. Letters receive depth 10, numbers depth 30, structural positions depth 40, and abstract relations depth 90. These assignments reflect intuition about abstractness—letters are concrete, relations are abstract—but lack principled derivation.
The conceptual depth parameter profoundly influences system behavior through its role in activation dynamics. The Slipnet's update mechanism (\texttt{slipnode.py:116-118}) decays activation according to:
\begin{equation}
\text{buffer}_v \leftarrow \text{buffer}_v - \text{activation}_v \times \frac{100 - \text{depth}_v}{100}
\end{equation}
This formulation makes deep (abstract) concepts decay more slowly than shallow (concrete) concepts. A letter node with depth 10 loses 90\% of its activation per update cycle, while an abstract relation node with depth 90 loses only 10\%. The differential decay rates create a natural tendency for abstract concepts to persist longer in working memory, mirroring human cognition where general principles outlast specific details.
Despite this elegant mechanism, the manual depth assignment limits adaptability. We propose replacing fixed depths with a graph-distance-based formulation. Define conceptual depth as the minimum graph distance from a node to the set of letter nodes (the most concrete concepts in the system):
\begin{equation}
d(v) = k \times \min_{l \in L} \text{dist}(v, l)
\end{equation}
where $L$ denotes the set of letter nodes, dist$(v, l)$ is the shortest path distance from $v$ to $l$, and $k$ is a scaling constant (approximately 10 to match the original scale).
This formulation automatically assigns appropriate depths. Letters themselves receive $d = 0$ (scaled to 10). The letterCategory node sits one hop from letters, yielding $d \approx 10-20$. String positions and bond types are typically 2-3 hops from letters, producing $d \approx 20-40$. Abstract relations like opposite and identity require traversing multiple edges from letters, resulting in $d \approx 80-90$. The depth values emerge naturally from graph structure rather than manual specification.
Moreover, this approach adapts to Slipnet modifications. Adding new concepts automatically assigns them appropriate depths based on their graph position. Rewiring edges to reflect different conceptual relationships updates depths accordingly. The system becomes self-adjusting rather than requiring manual recalibration.
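A minimal sketch of this computation (assuming an undirected NetworkX view of the Slipnet in which edges without explicit weights count as one hop; the function name is ours):
\begin{lstlisting}
import networkx as nx

def conceptual_depths(G, letters, k=10):
    # One multi-source shortest-path pass from all letter nodes;
    # each node's depth is k times its distance to the nearest letter.
    dist = nx.multi_source_dijkstra_path_length(G, letters)
    return {v: k * d for v, d in dist.items()}
\end{lstlisting}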
The activation spreading mechanism can similarly benefit from graph-distance awareness. Currently, when a fully active node spreads activation (\texttt{sliplink.py:23-24}), it adds a fixed amount to each neighbor:
\begin{lstlisting}
def spread_activation(self):
    self.destination.buffer += self.intrinsicDegreeOfAssociation()
\end{lstlisting}
We propose modulating this spread by the conceptual distance between nodes:
\begin{equation}
\text{buffer}_{\text{dest}} \leftarrow \text{buffer}_{\text{dest}} + \text{activation}_{\text{src}} \times \frac{100 - \text{dist}(\text{src}, \text{dest})}{100}
\end{equation}
This ensures that activation spreads more strongly to conceptually proximate nodes and weakens with distance, creating a natural gradient in the semantic space.
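In code, the change is local to the spreading step. A sketch (hypothetical attribute names; distances are assumed precomputed on the same 0--100 scale as activation):
\begin{lstlisting}
def spread_activation(self, dist):
    # dist: precomputed conceptual distance from source to
    # destination. Spread falls off linearly with distance
    # instead of adding a fixed increment to every neighbor.
    factor = max(0.0, (100.0 - dist) / 100.0)
    self.destination.buffer += self.source.activation * factor
\end{lstlisting}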
\subsection{Slippage via Dynamic Weight Adjustment}
Slippage represents Copycat's mechanism for flexible concept substitution during analogical mapping. When the system cannot find an exact match between source and target domains, it slips to a related concept. The current slippability formula (\texttt{conceptMapping.py:21-26}) computes:
\begin{equation}
\text{slippability}(i \rightarrow j) = \begin{cases}
100 & \text{if } \text{association}(i,j) = 100 \\
\text{association}(i,j) \times \left(1 - \left(\frac{\text{depth}_{\text{avg}}}{100}\right)^2\right) & \text{otherwise}
\end{cases}
\end{equation}
where $\text{depth}_{\text{avg}} = \frac{\text{depth}_i + \text{depth}_j}{2}$ averages the conceptual depths of the two concepts.
This formulation captures an important insight: slippage should be easier between closely associated concepts and harder for abstract concepts (which have deep theoretical commitments). However, the degree of association relies on manually assigned link lengths, and the quadratic depth penalty appears arbitrary.
Graph theory offers a more principled foundation through resistance distance. In a graph, the resistance distance $R_{ij}$ between nodes $i$ and $j$ can be interpreted as the effective resistance when the graph is viewed as an electrical network with unit resistors on each edge~\cite{klein1993resistance}. Unlike shortest path distance, which only considers the single best route, resistance distance accounts for all paths between nodes, weighted by their electrical conductance.
We propose computing slippability via:
\begin{equation}
\text{slippability}(i \rightarrow j) = 100 \times \exp\left(-\alpha \cdot R_{ij}\right)
\end{equation}
where $\alpha$ is a temperature-dependent parameter that modulates exploration. High temperature (exploration mode) decreases $\alpha$, allowing more liberal slippage. Low temperature (exploitation mode) increases $\alpha$, restricting slippage to very closely related concepts.
The resistance distance formulation provides several advantages. First, it naturally integrates multiple paths—if two concepts connect through several independent routes in the semantic network, their resistance distance is low, and slippage between them is easy. Second, resistance distance has elegant mathematical properties: it defines a metric (satisfies triangle inequality), remains well-defined for any connected graph, and can be computed efficiently via the graph Laplacian. Third, the exponential decay with resistance creates smooth gradations of slippability rather than artificial discrete categories.
Consider the slippage between ``left'' and ``right.'' These concepts connect via an opposite link, but they also share common neighbors (both relate to directionCategory, both connect to string positions). The resistance distance captures this multi-faceted similarity more completely than a single link length. Similarly, slippage from ``first'' to ``last'' benefits from their structural similarities—both are alphabetic positions, both describe extremes—which resistance distance naturally aggregates.
The temperature dependence of $\alpha$ introduces adaptive behavior. Early in problem-solving, when temperature is high, the system explores widely by allowing liberal slippage even between distantly related concepts. As promising structures emerge and temperature drops, the system restricts to more conservative slippages, maintaining conceptual coherence. This provides automatic annealing without hardcoded thresholds.
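The proposal is straightforward to prototype. The sketch below assumes the conductance-weighted Laplacian $L$ of the Slipnet is available as a NumPy array; the linear schedule mapping temperature to $\alpha$ is one possible choice, not a fixed part of the proposal:
\begin{lstlisting}
import numpy as np

def slippability(L, i, j, temperature, alpha0=0.05):
    # Resistance distance from the Laplacian pseudo-inverse
    L_pinv = np.linalg.pinv(L)
    R_ij = L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j]
    # alpha grows as temperature falls: exploitation restricts slippage
    alpha = alpha0 * (100 - temperature) / 100.0
    return 100 * np.exp(-alpha * R_ij)
\end{lstlisting}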
\subsection{Graph Visualization and Metrics}
Figure~\ref{fig:slipnet} presents a visualization of the Slipnet graph structure, with node colors representing conceptual depth and edge thickness indicating link strength (inverse of link length). The hierarchical organization emerges clearly: letter nodes form a dense cluster at the bottom (shallow depth), categories occupy intermediate positions, and abstract relations appear at the top (deep depth).
\begin{figure}[htbp]
\centering
% Placeholder for TikZ graph visualization
% TODO: Generate TikZ code showing ~30 key Slipnet nodes
% - Node size proportional to activation
% - Node color gradient: blue (shallow/concrete) to red (deep/abstract)
% - Edge thickness proportional to strength (inverse link length)
% - Show: letters, letterCategory, sameness, opposite, left/right, positions
\includegraphics[width=0.95\textwidth]{figure1_slipnet_graph.pdf}
\caption{Slipnet graph structure with conceptual depth encoded as node color intensity and link strength as edge thickness.}
\label{fig:slipnet}
\end{figure}
Figure~\ref{fig:activation_spread} illustrates activation spreading dynamics over three time steps. Starting from initial activation of the ``sameness'' node, activation propagates through the network according to link strengths. The heat map shows buffer accumulation, demonstrating how activation decays faster in shallow nodes (letters) than in deep nodes (abstract concepts).
\begin{figure}[htbp]
\centering
% Placeholder for activation spreading visualization
% TODO: Create 3-panel time series (t=0, t=5, t=10 updates)
% - Show activation levels as heat map
% - Demonstrate differential decay (shallow nodes fade faster)
% - Highlight propagation paths
\includegraphics[width=0.95\textwidth]{figure2_activation_spreading.pdf}
\caption{Activation spreading over time demonstrates differential decay: shallow nodes (letters) lose activation rapidly while deep nodes (abstract concepts) persist.}
\label{fig:activation_spread}
\end{figure}
Figure~\ref{fig:resistance_distance} presents a heat map of resistance distances between all node pairs. Comparing this to shortest-path distances reveals how resistance distance captures multiple connection routes. Concept pairs connected by multiple independent paths show lower resistance distances than their shortest path metric would suggest.
\begin{figure}[htbp]
\centering
% Placeholder for resistance distance heat map
% TODO: Matrix visualization with color intensity = resistance distance
% - All node pairs as matrix
% - Highlight key pairs (left/right, successor/predecessor, first/last)
% - Compare to shortest-path distance matrix
\includegraphics[width=0.95\textwidth]{figure3_resistance_distance.pdf}
\caption{Resistance distance heat map reveals multi-path connectivity: concepts connected by multiple routes show lower resistance than single-path connections.}
\label{fig:resistance_distance}
\end{figure}
\section{The Workspace as a Dynamic Graph}
The Workspace implements Copycat's working memory as a dynamic graph that evolves through structure-building and structure-breaking operations. This section analyzes the Workspace's graph representation, examines current approaches to structural importance and local support, and proposes graph-theoretical replacements using betweenness centrality and clustering coefficients.
\subsection{Workspace Graph Structure}
We formalize the Workspace as a time-varying graph $\mathcal{W}(t) = (V_w(t), E_w(t), \sigma)$ where:
\begin{itemize}
\item $V_w(t)$ denotes the set of object nodes (Letters and Groups) at time $t$
\item $E_w(t)$ represents the set of structural edges (Bonds and Correspondences) at time $t$
\item $\sigma: V_w \rightarrow \{\text{initial}, \text{modified}, \text{target}\}$ assigns each object to its string
\end{itemize}
The node set $V_w(t)$ contains two types of objects. Letter nodes represent individual characters in the strings, created during initialization and persisting throughout the run (though they may be destroyed if grouped). Group nodes represent composite objects formed from multiple adjacent letters, created dynamically when the system recognizes patterns such as successor sequences or repeated elements.
The edge set $E_w(t)$ similarly contains two types of structures. Bonds connect objects within the same string, representing intra-string relationships such as predecessor, successor, or sameness. Each bond $b \in E_w$ links a source object to a destination object and carries labels specifying its category (predecessor/successor/sameness), facet (which property grounds the relationship), and direction (left/right or none). Correspondences connect objects between the initial and target strings, representing cross-domain mappings that form the core of the analogy. Each correspondence $c \in E_w$ links an object from the initial string to an object in the target string and contains a set of concept mappings specifying how properties transform.
The dynamic nature of $\mathcal{W}(t)$ distinguishes it from the static Slipnet. Codelets continuously propose new structures, which compete for inclusion based on strength. Structures build (\texttt{bond.py:44-55}, \texttt{group.py:111-119}, \texttt{correspondence.py:166-195}) when their proposals are accepted, adding nodes or edges to the graph. Structures break (\texttt{bond.py:56-70}, \texttt{group.py:143-165}, \texttt{correspondence.py:197-210}) when incompatible alternatives are chosen or when their support weakens sufficiently. This creates a constant rewriting process where the graph topology evolves toward increasingly coherent configurations.
\subsection{Graph Betweenness for Structural Importance}
Current Copycat implementation computes object salience using fixed weighting schemes that do not adapt to graph structure. The code in \texttt{workspaceObject.py:88-95} defines:
\begin{align}
\text{intraStringSalience} &= 0.2 \times \text{relativeImportance} + 0.8 \times \text{intraStringUnhappiness} \\
\text{interStringSalience} &= 0.8 \times \text{relativeImportance} + 0.2 \times \text{interStringUnhappiness}
\end{align}
These fixed ratios (0.2/0.8 and 0.8/0.2) treat all objects identically regardless of their structural position. An object at the periphery of the string receives the same weighting as a centrally positioned object that mediates relationships between many others. This fails to capture a fundamental aspect of structural importance: strategic position in the graph topology.
Graph theory provides a principled solution through betweenness centrality~\cite{freeman1977set,brandes2001faster}. The betweenness centrality of a node $v$ quantifies how often $v$ appears on shortest paths between other nodes:
\begin{equation}
C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}
\end{equation}
where $\sigma_{st}$ denotes the number of shortest paths from $s$ to $t$, and $\sigma_{st}(v)$ denotes the number of those paths passing through $v$. Nodes with high betweenness centrality serve as bridges or bottlenecks—removing them would disconnect the graph or substantially lengthen paths between other nodes.
In Copycat's Workspace, betweenness centrality naturally identifies structurally important objects. Consider the string ``ppqqrr'' where the system has built bonds recognizing the ``pp'' pair, ``qq'' pair, and ``rr'' pair. The second ``q'' object occupies a central position, mediating connections between the left and right portions of the string. Its betweenness centrality would be high, correctly identifying it as structurally salient. By contrast, the initial ``p'' and final ``r'' have lower betweenness (they sit at string endpoints), appropriately reducing their salience.
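This claim is easy to verify directly. The sketch below builds an illustrative bond graph for ``ppqqrr'' (the successor bonds linking adjacent pairs are assumed for the example) and ranks objects by betweenness:
\begin{lstlisting}
import networkx as nx

# Sameness bonds within pairs, successor bonds between adjacent pairs
G = nx.Graph([('p1', 'p2'), ('q1', 'q2'), ('r1', 'r2'),
              ('p2', 'q1'), ('q2', 'r1')])
cb = nx.betweenness_centrality(G)
# The central q objects score highest; endpoints p1 and r2 score 0
print(sorted(cb.items(), key=lambda kv: -kv[1]))
\end{lstlisting}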
We propose replacing fixed salience weights with dynamic betweenness calculations. For intra-string salience, compute betweenness considering only bonds within the object's string:
\begin{equation}
\text{intraStringSalience}(v) = 100 \times \frac{C_B(v)}{\max_{u \in V_{\text{string}}} C_B(u)}
\end{equation}
This normalization ensures salience remains in the 0-100 range expected by other system components. For inter-string salience, compute betweenness considering the bipartite graph of correspondences:
\begin{equation}
\text{interStringSalience}(v) = 100 \times \frac{C_B(v)}{\max_{u \in V_w} C_B(u)}
\end{equation}
where the betweenness calculation now spans both initial and target strings connected by correspondence edges.
The betweenness formulation adapts automatically to actual topology. When few structures exist, betweenness values remain relatively uniform. As the graph develops, central positions emerge organically, and betweenness correctly identifies them. No manual specification of 0.2/0.8 weights is needed—the graph structure itself determines salience.
Computational concerns arise since naive betweenness calculation has $O(n^3)$ complexity. However, Brandes' algorithm~\cite{brandes2001faster} reduces this to $O(nm)$ for graphs with $n$ nodes and $m$ edges. Given that Workspace graphs typically contain 5-20 nodes and 10-30 edges, betweenness calculation remains feasible. Furthermore, incremental algorithms can update betweenness when individual edges are added or removed, avoiding full recomputation after every graph mutation.
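In code, the normalization amounts to a few lines (a sketch paralleling Algorithm~\ref{alg:betweenness_salience} later in the paper; \texttt{bond\_graph} is an illustrative name for the relevant subgraph):
\begin{lstlisting}
import networkx as nx

def betweenness_salience(bond_graph, obj):
    cb = nx.betweenness_centrality(bond_graph)
    max_cb = max(cb.values())
    return 100.0 * cb[obj] / max_cb if max_cb > 0 else 0.0
\end{lstlisting}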
\subsection{Local Graph Density and Clustering Coefficients}
Bond external strength currently relies on an ad-hoc local density calculation (\texttt{bond.py:153-175}) that counts supporting bonds in nearby positions. The code defines density as a ratio of actual supports to available slots, then applies an unexplained square root transformation:
\begin{lstlisting}
density = self.localDensity() / 100.0
density = density ** 0.5 * 100.0
\end{lstlisting}
This is then combined with a support factor $0.6^{1/n^3}$, which rises steeply toward 1 as the number of supporting bonds $n$ increases (\texttt{bond.py:123-132}):
\begin{lstlisting}
supportFactor = 0.6 ** (1.0 / supporters ** 3)
strength = supportFactor * density
\end{lstlisting}
The formulation attempts to capture an important intuition: bonds are stronger when surrounded by similar bonds, creating locally dense structural regions. However, the square root transformation and the specific power law $0.6^{1/n^3}$ lack justification. Why 0.6 rather than 0.5 or 0.7? Why cube the supporter count rather than square it or use it directly?
Graph theory offers a principled alternative through the local clustering coefficient~\cite{watts1998collective}. For a node $v$ with degree $k_v$, the clustering coefficient measures what fraction of $v$'s neighbors are also connected to each other:
\begin{equation}
C(v) = \frac{2 \times |\{e_{jk}: v_j, v_k \in N(v), e_{jk} \in E\}|}{k_v(k_v - 1)}
\end{equation}
where $N(v)$ denotes the neighbors of $v$ and $e_{jk}$ denotes an edge between neighbors $j$ and $k$. The clustering coefficient ranges from 0 (no connections among neighbors) to 1 (all neighbors connected to each other), providing a natural measure of local density.
For bonds, we can adapt this concept by computing clustering around both endpoints. Consider a bond $b$ connecting objects $u$ and $v$. Let $N(u)$ be the set of objects bonded to $u$, and $N(v)$ be the set of objects bonded to $v$. We count triangles—configurations where an object in $N(u)$ is also bonded to an object in $N(v)$:
\begin{equation}
\text{triangles}(b) = |\{(n_u, n_v): n_u \in N(u), n_v \in N(v), (n_u, n_v) \in E\}|
\end{equation}
The external strength then becomes:
\begin{equation}
\text{externalStrength}(b) = 100 \times \frac{\text{triangles}(b)}{|N(u)| \times |N(v)|}
\end{equation}
if the denominator is non-zero, and 0 otherwise. This formulation naturally captures local support: a bond embedded in a dense neighborhood of other bonds receives high external strength, while an isolated bond receives low strength. No arbitrary constants (0.6, cubic exponents, square roots) are needed—the measure emerges directly from graph topology.
An alternative formulation uses ego network density. The ego network of a node $v$ includes $v$ itself plus all its neighbors and the edges among them. The ego network density measures how interconnected this local neighborhood is:
\begin{equation}
\rho_{\text{ego}}(v) = \frac{|E_{\text{ego}}(v)|}{|V_{\text{ego}}(v)| \times (|V_{\text{ego}}(v)| - 1) / 2}
\end{equation}
For a bond connecting $u$ and $v$, we could compute the combined ego network density:
\begin{equation}
\text{externalStrength}(b) = 100 \times \frac{\rho_{\text{ego}}(u) + \rho_{\text{ego}}(v)}{2}
\end{equation}
Both the clustering coefficient and ego network density approaches eliminate hardcoded constants while providing theoretically grounded measures of local structure. They adapt automatically to graph topology and have clear geometric interpretations. Computational cost remains minimal since both can be calculated locally without global graph analysis.
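Both measures are directly available with standard tooling; a sketch of the ego-density variant, using \texttt{networkx}, whose \texttt{ego\_graph} helper returns a node plus its neighbors and the edges among them:
\begin{lstlisting}
import networkx as nx

def ego_density(G, v):
    ego = nx.ego_graph(G, v)  # v, its neighbors, edges among them
    n = ego.number_of_nodes()
    possible = n * (n - 1) / 2
    return ego.number_of_edges() / possible if possible > 0 else 0.0

def bond_external_strength(G, u, v):
    # Mean ego density of the bond's endpoints, scaled to 0-100
    return 100.0 * (ego_density(G, u) + ego_density(G, v)) / 2
\end{lstlisting}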
\subsection{Complete Substitution Table}
Table~\ref{tab:substitutions} presents comprehensive proposals for replacing each hardcoded constant with an appropriate graph metric. Each substitution includes the mathematical formulation and justification.
\begin{table}[htbp]
\centering
\small
\begin{tabular}{p{3cm}p{4.5cm}p{7cm}}
\toprule
\textbf{Original Constant} & \textbf{Graph Metric Replacement} & \textbf{Justification} \\
\midrule
memberCompatibility (0.7/1.0) & Structural equivalence: $SE(u,v) = 1 - \frac{|N(u) \triangle N(v)|}{|N(u) \cup N(v)|}$ & Objects with similar neighborhoods are compatible \\
facetFactor (0.7/1.0) & Degree centrality: $\frac{\deg(f)}{\max_v \deg(v)}$ & High-degree facets in Slipnet are more important \\
supportFactor ($0.6^{1/n^3}$) & Clustering coefficient: $C(v) = \frac{2T}{k(k-1)}$ & Natural measure of local embeddedness \\
jump\_threshold (55.0) & Percolation threshold: $\theta_c = \frac{\langle k \rangle}{N-1} \times 100$ & Threshold adapts to network connectivity \\
salience\_weights (0.2/0.8, 0.8/0.2) & Betweenness centrality: $C_B(v) = \sum \frac{\sigma_{st}(v)}{\sigma_{st}}$ & Strategic position in graph topology \\
length\_factors (5, 20, 60, 90) & Subgraph density: $\rho(G_{sub}) = \frac{2|E|}{|V|(|V|-1)} \times 100$ & Larger, denser groups score higher naturally \\
mapping\_factors (0.8, 1.2, 1.6) & Path multiplicity: \# edge-disjoint paths & More connection routes = stronger mapping \\
\bottomrule
\end{tabular}
\caption{Proposed graph-theoretical replacements for hardcoded constants. Each metric provides principled, adaptive measurement based on graph structure.}
\label{tab:substitutions}
\end{table}
\subsection{Algorithmic Implementations}
Algorithm~\ref{alg:bond_strength} presents pseudocode for computing bond external strength using the clustering coefficient approach. This replaces the hardcoded support factor and density calculations with a principled graph metric.
\begin{algorithm}[htbp]
\caption{Graph-Based Bond External Strength}
\label{alg:bond_strength}
\begin{algorithmic}[1]
\REQUIRE Bond $b$ with endpoints $(u, v)$
\ENSURE Updated externalStrength
\STATE $N_u \leftarrow$ \textsc{GetConnectedObjects}$(u)$
\STATE $N_v \leftarrow$ \textsc{GetConnectedObjects}$(v)$
\STATE $\text{triangles} \leftarrow 0$
\FOR{each $n_u \in N_u$}
\FOR{each $n_v \in N_v$}
\IF{$(n_u, n_v) \in E$ \OR $(n_v, n_u) \in E$}
\STATE $\text{triangles} \leftarrow \text{triangles} + 1$
\ENDIF
\ENDFOR
\ENDFOR
\STATE $\text{possible} \leftarrow |N_u| \times |N_v|$
\IF{$\text{possible} > 0$}
\STATE $b.\text{externalStrength} \leftarrow 100 \times \text{triangles} / \text{possible}$
\ELSE
\STATE $b.\text{externalStrength} \leftarrow 0$
\ENDIF
\RETURN $b.\text{externalStrength}$
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:betweenness_salience} shows how to compute object salience using betweenness centrality. This eliminates the fixed 0.2/0.8 weights in favor of topology-driven importance.
\begin{algorithm}[htbp]
\caption{Betweenness-Based Salience}
\label{alg:betweenness_salience}
\begin{algorithmic}[1]
\REQUIRE Object $obj$, Workspace graph $G = (V, E)$
\ENSURE Salience score
\STATE $\text{betweenness} \leftarrow$ \textsc{ComputeBetweennessCentrality}$(G)$
\STATE $\text{maxBetweenness} \leftarrow \max_{v \in V} \text{betweenness}[v]$
\IF{$\text{maxBetweenness} > 0$}
\STATE $\text{normalized} \leftarrow \text{betweenness}[obj] / \text{maxBetweenness}$
\ELSE
\STATE $\text{normalized} \leftarrow 0$
\ENDIF
\RETURN $\text{normalized} \times 100$
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:adaptive_threshold} implements an adaptive activation threshold based on network percolation theory. Rather than using a fixed value of 55.0, the threshold adapts to current Slipnet connectivity.
\begin{algorithm}[htbp]
\caption{Adaptive Activation Threshold}
\label{alg:adaptive_threshold}
\begin{algorithmic}[1]
\REQUIRE Slipnet graph $S = (V, E, \text{activation})$
\ENSURE Dynamic threshold $\theta$
\STATE $\text{activeNodes} \leftarrow \{v \in V : \text{activation}[v] > 0\}$
\STATE $\text{avgDegree} \leftarrow \frac{1}{|\text{activeNodes}|} \sum_{v \in \text{activeNodes}} \deg(v)$
\STATE $N \leftarrow |V|$
\STATE $\theta \leftarrow (\text{avgDegree} / (N - 1)) \times 100$
\RETURN $\theta$
\end{algorithmic}
\end{algorithm}
These algorithms demonstrate the practical implementability of graph-theoretical replacements. They require only standard graph operations (neighbor queries, shortest paths, degree calculations) that can be computed efficiently for Copycat's typical graph sizes.
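As a final illustration, Algorithm~\ref{alg:adaptive_threshold} translates directly to Python (a sketch; \texttt{activation} is assumed to be a node-to-level mapping):
\begin{lstlisting}
def adaptive_threshold(G, activation):
    # Percolation-style threshold from the mean degree of active nodes
    active = [v for v in G if activation[v] > 0]
    if not active:
        return 55.0  # fall back to the original constant
    avg_degree = sum(G.degree(v) for v in active) / len(active)
    return 100.0 * avg_degree / (len(G) - 1)
\end{lstlisting}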
\subsection{Workspace Evolution Visualization}
Figure~\ref{fig:workspace_evolution} illustrates how the Workspace graph evolves over four time steps while solving the problem ``abc $\rightarrow$ abd; ppqqrr $\rightarrow$ ?''. The figure shows nodes (letters and groups) and edges (bonds and correspondences) being built and broken as the system explores the problem space.
\begin{figure}[htbp]
\centering
% Placeholder for workspace evolution visualization
% TODO: Create 4-panel sequence showing graph changes
% - Panel 1 (t=0): Initial letters only, no bonds
% - Panel 2 (t=50): Some bonds form (pp, qq, rr groups emerging)
% - Panel 3 (t=150): Correspondences begin forming
% - Panel 4 (t=250): Stable structure with groups and correspondences
% - Annotate nodes with betweenness values
% - Show structures being built (green) and broken (red)
\includegraphics[width=0.95\textwidth]{figure4_workspace_evolution.pdf}
\caption{Workspace graph evolution during analogical reasoning shows progressive structure formation, with betweenness centrality values identifying strategically important objects.}
\label{fig:workspace_evolution}
\end{figure}
Figure~\ref{fig:betweenness_dynamics} plots betweenness centrality values for each object over time. Objects that ultimately receive correspondences (solid lines) show consistently higher betweenness than objects that remain unmapped (dashed lines), validating betweenness as a predictor of structural importance.
\begin{figure}[htbp]
\centering
% Placeholder for betweenness time series
% TODO: Line plot with time on x-axis, betweenness on y-axis
% - Solid lines: objects that get correspondences
% - Dashed lines: objects that don't
% - Show: betweenness predicts correspondence selection
\includegraphics[width=0.95\textwidth]{figure5_betweenness_dynamics.pdf}
\caption{Betweenness centrality dynamics reveal that objects with sustained high centrality are preferentially selected for correspondences.}
\label{fig:betweenness_dynamics}
\end{figure}
Figure~\ref{fig:clustering_distribution} compares the distribution of clustering coefficients in successful versus failed problem-solving runs. Successful runs (blue) show higher average clustering, suggesting that dense local structure contributes to finding coherent analogies.
\begin{figure}[htbp]
\centering
% Placeholder for clustering histogram
% TODO: Overlaid histograms (or box plots)
% - Blue: successful runs (found correct answer)
% - Red: failed runs (no answer or incorrect)
% - X-axis: clustering coefficient, Y-axis: frequency
% - Show: successful runs have higher average clustering
\includegraphics[width=0.95\textwidth]{figure6_clustering_distribution.pdf}
\caption{Successful analogy-making runs show higher clustering coefficients, indicating that locally dense structure promotes coherent solutions.}
\label{fig:clustering_distribution}
\end{figure}
\section{Discussion}
The graph-theoretical reformulation of Copycat offers several advantages over the current hardcoded approach: principled theoretical foundations, automatic adaptation to problem structure, enhanced interpretability, and natural connections to modern machine learning. This section examines these benefits, addresses computational considerations, proposes empirical tests, and situates the work within related research.
\subsection{Theoretical Advantages}
Graph metrics provide rigorous mathematical foundations that hardcoded constants lack. Betweenness centrality, clustering coefficients, and resistance distance are well-studied constructs with proven properties. We know their computational complexity, understand their behavior under various graph topologies, and can prove theorems about their relationships. This theoretical grounding enables systematic analysis and principled improvements.
Consider the contrast between the current support factor $0.6^{1/n^3}$ and the clustering coefficient. The former offers no explanation for its specific functional form. Why 0.6 rather than any other base? Why raise it to the power $1/n^3$ rather than $1/n^2$ or $1/n^4$? The choice appears arbitrary, selected through trial and error. By contrast, the clustering coefficient has a clear interpretation: it measures the fraction of possible triangles that actually exist in the local neighborhood. Its bounds are known ($0 \leq C \leq 1$), its relationship to other graph properties is established (related to transitivity and small-world structure~\cite{watts1998collective}), and its behavior under graph transformations can be analyzed.
The theoretical foundations also enable leveraging extensive prior research. Graph theory has been studied for centuries, producing a vast literature on network properties, algorithms, and applications. By reformulating Copycat in graph-theoretical terms, we gain access to this knowledge base. Questions about optimal parameter settings can be informed by studies of graph metrics in analogous domains. Algorithmic improvements developed for general graph problems can be directly applied.
Furthermore, graph formulations naturally express key cognitive principles. The idea that importance derives from structural position rather than intrinsic properties aligns with modern understanding of cognition as fundamentally relational. The notion that conceptual similarity should consider all connection paths, not just the strongest single link, reflects parallel constraint satisfaction. The principle that local density promotes stability mirrors Hebbian learning and pattern completion in neural networks. Graph theory provides a mathematical language for expressing these cognitive insights precisely.
\subsection{Adaptability and Scalability}
Graph metrics automatically adjust to problem characteristics, eliminating the brittleness of fixed parameters. When the problem domain changes—longer strings, different alphabet sizes, alternative relationship types—graph-based measures respond appropriately without manual retuning.
Consider the length factor problem discussed in Section 2.3. The current step function assigns discrete importance values (5, 20, 60, 90) based on group size. This works adequately for strings of length 3-6 but scales poorly. Graph-based subgraph density, by contrast, adapts naturally. For a group of $n$ objects with $m$ bonds among them, the density $\rho = 2m/(n(n-1))$ ranges continuously from 0 (no bonds) to 1 (fully connected). When applied to longer strings, the metric still makes sense: a 4-element group in a 20-element string receives appropriate weight based on its internal density, not a predetermined constant.
Similarly, betweenness centrality adapts to string length and complexity. In a short string with few objects, betweenness values remain relatively uniform—no object occupies a uniquely strategic position. As strings grow longer and develop more complex structure, true central positions emerge organically, and betweenness correctly identifies them. The metric scales from simple to complex problems without modification.
This adaptability extends to entirely new problem domains. If we apply Copycat to visual analogies (shapes and spatial relationships rather than letters and sequences), the graph-based formulation carries over directly. Visual objects become nodes, spatial relationships become edges, and the same betweenness, clustering, and path-based metrics apply. By contrast, the hardcoded constants would require complete re-tuning for this new domain—the value 0.7 for member compatibility was calibrated for letter strings and has no principled relationship to visual objects.
\subsection{Computational Considerations}
Replacing hardcoded constants with graph computations introduces computational overhead. Table~\ref{tab:complexity} analyzes the complexity of key graph operations and their frequency in Copycat's execution.
\begin{table}[htbp]
\centering
\begin{tabular}{llll}
\toprule
\textbf{Metric} & \textbf{Complexity} & \textbf{Frequency} & \textbf{Mitigation Strategy} \\
\midrule
Betweenness (naive) & $O(n^3)$ & Per codelet & Use Brandes algorithm \\
Betweenness (Brandes) & $O(nm)$ & Per codelet & Incremental updates \\
Clustering coefficient & $O(d^2)$ & Per node update & Local computation \\
Shortest path (Dijkstra) & $O(n \log n + m)$ & Occasional & Cache results \\
Resistance distance & $O(n^3)$ & Slippage only & Pseudo-inverse caching \\
Structural equivalence & $O(d^2)$ & Bond proposal & Neighbor set operations \\
Subgraph density & $O(m_{sub})$ & Group update & Count local edges only \\
\bottomrule
\end{tabular}
\caption{Computational complexity of graph metrics and mitigation strategies. Here $n$ = nodes, $m$ = edges, $d$ = degree, $m_{sub}$ = edges in subgraph.}
\label{tab:complexity}
\end{table}
For typical Workspace graphs (5-20 nodes, 10-30 edges), even the most expensive operations remain tractable. The Brandes betweenness algorithm~\cite{brandes2001faster} completes in milliseconds for graphs of this size. Clustering coefficients require only local neighborhood analysis ($O(d^2)$ where $d$ is degree, typically $d \leq 4$ in Copycat). Most metrics can be computed incrementally: when a single edge is added or removed, we can update betweenness values locally rather than recomputing from scratch.
The Slipnet presents different considerations. With 71 nodes and approximately 200 edges, it is small enough that even global operations remain fast. Computing all-pairs shortest paths via Floyd--Warshall takes on the order of $71^3 \approx 360{,}000$ operations, which is negligible on modern hardware. The resistance distance calculation, which requires computing the pseudo-inverse of the graph Laplacian, also completes quickly for 71 nodes and can be cached since the Slipnet structure is static.
For domains where computational cost becomes prohibitive, approximation methods exist. Betweenness can be approximated by sampling a subset of shortest paths rather than computing all paths, reducing complexity to $O(km)$ where $k$ is the sample size~\cite{newman2018networks}. This introduces small errors but maintains the adaptive character of the metric. Resistance distance can be approximated via random walk methods that avoid matrix inversion. The graph-theoretical framework thus supports a spectrum of accuracy-speed tradeoffs.
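This approximation is exposed directly by common tooling; for example, \texttt{networkx} estimates betweenness from $k$ sampled pivot nodes (the variable \texttt{workspace\_graph} is an illustrative stand-in for the current Workspace graph):
\begin{lstlisting}
import networkx as nx

# Estimate betweenness from k sampled pivots (k << n)
approx_cb = nx.betweenness_centrality(workspace_graph,
                                      k=min(10, len(workspace_graph)))
\end{lstlisting}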
\subsection{Empirical Predictions and Testable Hypotheses}
The graph-theoretical reformulation generates specific empirical predictions that can be tested experimentally:
\paragraph{Hypothesis 1: Improved Performance Consistency}
Graph-based Copycat should exhibit more consistent performance across problems of varying difficulty than the original hardcoded version. As problem complexity increases (longer strings, more abstract relationships), adaptive metrics should maintain appropriateness while fixed constants become less suitable. We predict smaller variance in answer quality and convergence time for the graph-based system.
\paragraph{Hypothesis 2: Temperature-Graph Entropy Correlation}
System temperature should correlate with graph-theoretical measures of disorder. Specifically, we predict that temperature inversely correlates with Workspace graph clustering coefficient (high clustering = low temperature) and correlates with betweenness centrality variance (many objects with very different centralities = high temperature). This would validate temperature as reflecting structural coherence.
\paragraph{Hypothesis 3: Clustering Predicts Success}
Successful problem-solving runs should show systematically higher average clustering coefficients in their final Workspace graphs than failed or incomplete runs. This would support the hypothesis that locally dense structure promotes coherent analogies.
\paragraph{Hypothesis 4: Betweenness Predicts Correspondence Selection}
Objects with higher time-averaged betweenness centrality should be preferentially selected for correspondences. Plotting correspondence formation time against prior betweenness should show positive correlation, demonstrating that strategic structural position determines mapping priority.
\paragraph{Hypothesis 5: Graceful Degradation}
When problem difficulty increases (e.g., moving from 3-letter to 10-letter strings), graph-based Copycat should show more graceful performance degradation than the hardcoded version. We predict a smooth decline in success rate rather than a sharp cliff, since metrics scale continuously.
These hypotheses can be tested by implementing the graph-based modifications and running benchmark comparisons. The original Copycat's behavior is well-documented, providing a baseline for comparison. Running both versions on extended problem sets (varying string length, transformation complexity, and domain characteristics) would generate the data needed to evaluate these predictions.
\subsection{Connections to Related Work}
The graph-theoretical reformulation of Copycat connects to several research streams in cognitive science, artificial intelligence, and neuroscience.
\paragraph{Analogical Reasoning}
Structure-mapping theory~\cite{gentner1983structure} emphasizes systematic structural alignment in analogy-making. Gentner's approach explicitly compares relational structures, seeking one-to-one correspondences that preserve higher-order relationships. Our graph formulation makes this structuralism more precise: analogies correspond to graph homomorphisms that preserve edge labels and maximize betweenness-weighted node matches. The resistance distance formulation of slippage provides a quantitative measure of ``systematicity''—slippages along short resistance paths maintain more structural similarity than jumps across large distances.
\paragraph{Graph Neural Networks}
Modern graph neural networks (GNNs)~\cite{scarselli2008graph} learn to compute node and edge features through message passing on graphs. The Copycat reformulation suggests a potential hybrid: use GNNs to learn graph metric computations from data rather than relying on fixed formulas like betweenness. The GNN could learn to predict which objects deserve high salience based on training examples, potentially discovering novel structural patterns that standard metrics miss. Conversely, Copycat's symbolic structure could provide interpretability to GNN analogical reasoning systems.
\paragraph{Conceptual Spaces}
Gärdenfors' conceptual spaces framework~\cite{gardenfors2000conceptual} represents concepts geometrically, with similarity as distance in a metric space. The resistance distance reformulation of the Slipnet naturally produces a metric space: resistance distance satisfies the triangle inequality and provides a true distance measure over concepts. This connects Copycat to the broader conceptual spaces program and suggests using dimensional reduction techniques to visualize the conceptual geometry.
\paragraph{Small-World Networks}
Neuroscience research reveals that brain networks exhibit small-world properties: high local clustering combined with short path lengths between distant regions~\cite{watts1998collective}. The Slipnet's structure shows similar characteristics—abstract concepts cluster together (high local clustering) while remaining accessible from concrete concepts (short paths). This parallel suggests that graph properties successful in natural cognitive architectures may also benefit artificial systems.
\paragraph{Network Science in Cognition}
Growing research applies network science methods to cognitive phenomena: semantic networks, problem-solving processes, and knowledge representation~\cite{newman2018networks}. The Copycat reformulation contributes to this trend by demonstrating that a symbolic cognitive architecture can be rigorously analyzed through graph-theoretical lenses. The approach may generalize to other cognitive architectures, suggesting a broader research program of graph-based cognitive modeling.
\subsection{Limitations and Open Questions}
Despite its advantages, the graph-theoretical reformulation faces challenges and raises open questions.
\paragraph{Parameter Selection}
While graph metrics eliminate many hardcoded constants, some parameters remain. The resistance distance formulation requires choosing $\alpha$ (the decay parameter in $\exp(-\alpha R_{ij})$). The conceptual depth scaling requires selecting $k$. The betweenness normalization could use different schemes (min-max, z-score, etc.). These choices have less impact than the original hardcoded constants and can be derived on more principled grounds (e.g., $\alpha$ from temperature), but complete parameter elimination remains elusive.
\paragraph{Multi-Relational Graphs}
The Slipnet contains multiple edge types (category, instance, property, slip, non-slip links). Standard graph metrics like betweenness treat all edges identically. Properly handling multi-relational graphs requires either edge-type-specific metrics or careful encoding of edge types into weights. Research on knowledge graph embeddings may offer solutions.
\paragraph{Temporal Dynamics}
The Workspace graph evolves over time, but graph metrics provide static snapshots. Capturing temporal patterns—how centrality changes, whether oscillations occur, what trajectory successful runs follow—requires time-series analysis of graph metrics. Dynamic graph theory and temporal network analysis offer relevant techniques but have not yet been integrated into the Copycat context.
\paragraph{Learning and Meta-Learning}
The current proposal manually specifies which graph metric replaces which constant (betweenness for salience, clustering for support, etc.). Could the system learn these associations from experience? Meta-learning approaches might discover that different graph metrics work best for different problem types, automatically adapting the metric selection strategy.
\subsection{Broader Implications}
Beyond Copycat specifically, this work demonstrates a general methodology for modernizing legacy AI systems. Many symbolic AI systems from the 1980s and 1990s contain hardcoded parameters tuned for specific domains. Graph-theoretical reformulation offers a pathway to increase their adaptability and theoretical grounding. The approach represents a middle ground between purely symbolic AI (which risks brittleness through excessive hardcoding) and purely statistical AI (which risks opacity through learned parameters). Graph metrics provide structure while remaining adaptive.
The reformulation also suggests bridges between symbolic and neural approaches. Graph neural networks could learn to compute custom metrics for specific domains while maintaining interpretability through graph visualization. Copycat's symbolic constraints (objects, bonds, correspondences) could provide inductive biases for neural analogy systems. This hybrid direction may prove more fruitful than purely symbolic or purely neural approaches in isolation.
\section{Conclusion}
This paper has proposed a comprehensive graph-theoretical reformulation of the Copycat architecture. We identified numerous hardcoded constants in the original implementation—including bond compatibility factors, support decay functions, salience weights, and activation thresholds—that lack principled justification and limit adaptability. For each constant, we proposed a graph metric replacement: structural equivalence for compatibility, clustering coefficients for local support, betweenness centrality for salience, resistance distance for slippage, and percolation thresholds for activation.
These replacements provide three key advantages. Theoretically, they rest on established mathematical frameworks with proven properties and extensive prior research. Practically, they adapt automatically to problem structure without requiring manual retuning for new domains. Cognitively, they align with modern understanding of brain networks and relational cognition.
The reformulation reinterprets both major components of Copycat's architecture. The Slipnet becomes a weighted graph where conceptual depth emerges from minimum distance to concrete nodes and slippage derives from resistance distance between concepts. The Workspace becomes a dynamic graph where object salience reflects betweenness centrality and structural support derives from clustering coefficients. Standard graph algorithms can compute these metrics efficiently for Copycat's typical graph sizes.
\subsection{Future Work}
Several directions promise to extend and validate this work:
\paragraph{Implementation and Validation}
The highest priority is building a prototype graph-based Copycat and empirically testing the hypotheses proposed in Section 5.3. Comparing performance between original and graph-based versions on extended problem sets would quantify the benefits of adaptability. Analyzing correlation between graph metrics and behavioral outcomes (correspondence selection, answer quality) would validate the theoretical predictions.
\paragraph{Domain Transfer}
Testing graph-based Copycat on non-letter-string domains (visual analogies, numerical relationships, abstract concepts) would demonstrate genuine adaptability. The original hardcoded constants would require complete retuning for such domains, while graph metrics should transfer directly. Success in novel domains would provide strong evidence for the reformulation's value.
\paragraph{Neuroscience Comparison}
Comparing Copycat's graph metrics to brain imaging data during human analogy-making could test cognitive plausibility. Do brain regions with high betweenness centrality show increased activation during analogy tasks? Does clustering in functional connectivity correlate with successful analogy completion? Such comparisons would ground the computational model in neural reality.
\paragraph{Hybrid Neural-Symbolic Systems}
Integrating graph neural networks to learn custom metrics for specific problem types represents an exciting direction. Rather than manually specifying betweenness for salience, a GNN could learn which graph features predict important objects, potentially discovering novel structural patterns. This would combine symbolic interpretability with neural adaptability.
\paragraph{Meta-Learning Metric Selection}
Developing meta-learning systems that automatically discover which graph metrics work best for which problem characteristics would eliminate remaining parameter choices. The system could learn from experience that betweenness centrality predicts importance for spatial problems while eigenvector centrality works better for temporal problems, adapting its metric selection strategy.
\paragraph{Extension to Other Cognitive Architectures}
The methodology developed here—identifying hardcoded constants and replacing them with graph metrics—may apply to other symbolic cognitive architectures. Systems like SOAR, ACT-R, and Companion~\cite{forbus2017companion} similarly contain numerous parameters that could potentially be reformulated graph-theoretically. This suggests a broader research program of graph-based cognitive architecture design.
\subsection{Closing Perspective}
The hardcoded constants in Copycat's original implementation represented practical necessities given the computational constraints and theoretical understanding of the early 1990s. Mitchell and Hofstadter made pragmatic choices that enabled the system to work, demonstrating fluid analogical reasoning for the first time in a computational model. These achievements deserve recognition.
Three decades later, we can build on this foundation with tools unavailable to the original designers. Graph theory has matured into a powerful analytical framework. Computational resources enable real-time calculation of complex metrics. Understanding of cognitive neuroscience has deepened, revealing the brain's graph-like organization. Modern machine learning offers hybrid symbolic-neural approaches. These advances create opportunities to refine Copycat's architecture while preserving its core insights about fluid cognition.
The graph-theoretical reformulation honors Copycat's original vision—modeling analogy-making as parallel constraint satisfaction over structured representations—while addressing its limitations. By replacing hardcoded heuristics with principled constructs, we move toward cognitive architectures that are both theoretically grounded and practically adaptive. This represents not a rejection of symbolic AI but rather its evolution, incorporating modern graph theory and network science to build more robust and flexible cognitive models.
\bibliographystyle{plain}
\bibliography{references}
\end{document}

LaTeX/references.bib
@book{mitchell1993analogy,
title={Analogy-Making as Perception: A Computer Model},
author={Mitchell, Melanie},
year={1993},
publisher={MIT Press},
address={Cambridge, MA}
}
@book{hofstadter1995fluid,
title={Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought},
author={Hofstadter, Douglas R. and FARG},
year={1995},
publisher={Basic Books},
address={New York, NY}
}
@article{chalmers1992high,
title={High-Level Perception, Representation, and Analogy: A Critique of Artificial Intelligence Methodology},
author={Chalmers, David J. and French, Robert M. and Hofstadter, Douglas R.},
journal={Journal of Experimental \& Theoretical Artificial Intelligence},
volume={4},
number={3},
pages={185--211},
year={1992},
publisher={Taylor \& Francis}
}
@article{freeman1977set,
title={A Set of Measures of Centrality Based on Betweenness},
author={Freeman, Linton C.},
journal={Sociometry},
volume={40},
number={1},
pages={35--41},
year={1977},
publisher={JSTOR}
}
@article{brandes2001faster,
title={A Faster Algorithm for Betweenness Centrality},
author={Brandes, Ulrik},
journal={Journal of Mathematical Sociology},
volume={25},
number={2},
pages={163--177},
year={2001},
publisher={Taylor \& Francis}
}
@article{watts1998collective,
title={Collective Dynamics of 'Small-World' Networks},
author={Watts, Duncan J. and Strogatz, Steven H.},
journal={Nature},
volume={393},
number={6684},
pages={440--442},
year={1998},
publisher={Nature Publishing Group}
}
@book{newman2018networks,
title={Networks},
author={Newman, Mark E. J.},
year={2018},
publisher={Oxford University Press},
edition={2nd},
address={Oxford, UK}
}
@article{klein1993resistance,
title={Resistance Distance},
author={Klein, Douglas J. and Randi\'{c}, Milan},
journal={Journal of Mathematical Chemistry},
volume={12},
number={1},
pages={81--95},
year={1993},
publisher={Springer}
}
@article{scarselli2008graph,
title={The Graph Neural Network Model},
author={Scarselli, Franco and Gori, Marco and Tsoi, Ah Chung and Hagenbuchner, Markus and Monfardini, Gabriele},
journal={IEEE Transactions on Neural Networks},
volume={20},
number={1},
pages={61--80},
year={2008},
publisher={IEEE}
}
@article{gentner1983structure,
title={Structure-Mapping: A Theoretical Framework for Analogy},
author={Gentner, Dedre},
journal={Cognitive Science},
volume={7},
number={2},
pages={155--170},
year={1983},
publisher={Wiley Online Library}
}
@book{gardenfors2000conceptual,
title={Conceptual Spaces: The Geometry of Thought},
author={G\"{a}rdenfors, Peter},
year={2000},
publisher={MIT Press},
address={Cambridge, MA}
}
@article{french1995subcognition,
title={Subcognition and the Limits of the Turing Test},
author={French, Robert M.},
journal={Mind},
volume={99},
number={393},
pages={53--65},
year={1995},
publisher={Oxford University Press}
}
@article{forbus2017companion,
title={Companion Cognitive Systems: A Step toward Human-Level AI},
author={Forbus, Kenneth D. and Hinrichs, Thomas R.},
journal={AI Magazine},
volume={38},
number={4},
pages={25--35},
year={2017},
publisher={AAAI}
}
@inproceedings{kansky2017schema,
title={Schema Networks: Zero-Shot Transfer with a Generative Causal Model of Intuitive Physics},
author={Kansky, Ken and Silver, Tom and M\'{e}ly, David A. and Eldawy, Mohamed and L\'{a}zaro-Gredilla, Miguel and Lou, Xinghua and Dorfman, Nimrod and Sidor, Szymon and Phoenix, Scott and George, Dileep},
booktitle={International Conference on Machine Learning},
pages={1809--1818},
year={2017},
organization={PMLR}
}

"""
Compute and visualize resistance distance matrix for Slipnet concepts (Figure 3)
Resistance distance considers all paths between nodes, weighted by conductance
"""
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
from scipy.linalg import pinv
# Define key Slipnet nodes
key_nodes = [
'a', 'b', 'c',
'letterCategory',
'left', 'right',
'leftmost', 'rightmost',
'first', 'last',
'predecessor', 'successor', 'sameness',
'identity', 'opposite',
]
# Create graph with resistances (link lengths)
G = nx.Graph()
edges = [
# Letters to category
('a', 'letterCategory', 97),
('b', 'letterCategory', 97),
('c', 'letterCategory', 97),
# Sequential relationships
('a', 'b', 50),
('b', 'c', 50),
# Bond types
('predecessor', 'successor', 60),
('sameness', 'identity', 50),
# Opposite relations
('left', 'right', 80),
('first', 'last', 80),
('leftmost', 'rightmost', 90),
# Slippable connections
('left', 'leftmost', 90),
('right', 'rightmost', 90),
('first', 'leftmost', 100),
('last', 'rightmost', 100),
# Abstract relations
('identity', 'opposite', 70),
('predecessor', 'identity', 60),
('successor', 'identity', 60),
('sameness', 'identity', 40),
]
for src, dst, link_len in edges:
    # Resistance = link length, conductance = 1/resistance
    G.add_edge(src, dst, resistance=link_len, conductance=1.0/link_len)
# Only keep nodes that are in our key list and connected
connected_nodes = [n for n in key_nodes if n in G.nodes()]
def compute_resistance_distance(G, nodes):
    """Compute the resistance distance matrix via the graph Laplacian."""
    n = len(nodes)
    # Build the Laplacian matrix (weighted by conductance)
    L = np.zeros((n, n))
    for i, node_i in enumerate(nodes):
        for j, node_j in enumerate(nodes):
            if G.has_edge(node_i, node_j):
                conductance = G[node_i][node_j]['conductance']
                L[i, j] = -conductance
                L[i, i] += conductance
    # Compute the Moore-Penrose pseudo-inverse of the Laplacian
    try:
        L_pinv = pinv(L)
    except np.linalg.LinAlgError:
        # Fallback: use shortest path distances
        return compute_shortest_path_matrix(G, nodes)
    # Resistance distance: R_ij = L+_ii + L+_jj - 2*L+_ij
    R = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            R[i, j] = L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j]
    return R
def compute_shortest_path_matrix(G, nodes):
    """Compute the (resistance-weighted) shortest path distance matrix."""
    n = len(nodes)
    D = np.zeros((n, n))
    for i, node_i in enumerate(nodes):
        for j, node_j in enumerate(nodes):
            if i != j:
                try:
                    D[i, j] = nx.shortest_path_length(G, node_i, node_j,
                                                      weight='resistance')
                except nx.NetworkXNoPath:
                    D[i, j] = 1000  # Large sentinel for disconnected nodes
    return D
# Compute both matrices
R_resistance = compute_resistance_distance(G, connected_nodes)
R_shortest = compute_shortest_path_matrix(G, connected_nodes)
# Create visualization
fig, axes = plt.subplots(1, 2, figsize=(16, 7))
# Left: Resistance distance
ax_left = axes[0]
im_left = ax_left.imshow(R_resistance, cmap='YlOrRd', aspect='auto')
ax_left.set_xticks(range(len(connected_nodes)))
ax_left.set_yticks(range(len(connected_nodes)))
ax_left.set_xticklabels(connected_nodes, rotation=45, ha='right', fontsize=9)
ax_left.set_yticklabels(connected_nodes, fontsize=9)
ax_left.set_title('Resistance Distance Matrix\n(Considers all paths, weighted by conductance)',
fontsize=12, fontweight='bold')
cbar_left = plt.colorbar(im_left, ax=ax_left, fraction=0.046, pad=0.04)
cbar_left.set_label('Resistance Distance', rotation=270, labelpad=20)
# Add grid
ax_left.set_xticks(np.arange(len(connected_nodes))-0.5, minor=True)
ax_left.set_yticks(np.arange(len(connected_nodes))-0.5, minor=True)
ax_left.grid(which='minor', color='gray', linestyle='-', linewidth=0.5)
# Right: Shortest path distance
ax_right = axes[1]
im_right = ax_right.imshow(R_shortest, cmap='YlOrRd', aspect='auto')
ax_right.set_xticks(range(len(connected_nodes)))
ax_right.set_yticks(range(len(connected_nodes)))
ax_right.set_xticklabels(connected_nodes, rotation=45, ha='right', fontsize=9)
ax_right.set_yticklabels(connected_nodes, fontsize=9)
ax_right.set_title('Shortest Path Distance Matrix\n(Only considers single best path)',
fontsize=12, fontweight='bold')
cbar_right = plt.colorbar(im_right, ax=ax_right, fraction=0.046, pad=0.04)
cbar_right.set_label('Shortest Path Distance', rotation=270, labelpad=20)
# Add grid
ax_right.set_xticks(np.arange(len(connected_nodes))-0.5, minor=True)
ax_right.set_yticks(np.arange(len(connected_nodes))-0.5, minor=True)
ax_right.grid(which='minor', color='gray', linestyle='-', linewidth=0.5)
plt.suptitle('Resistance Distance vs Shortest Path Distance for Slipnet Concepts\n' +
'Lower values = easier slippage between concepts',
fontsize=14, fontweight='bold')
plt.tight_layout()
plt.savefig('figure3_resistance_distance.pdf', dpi=300, bbox_inches='tight')
plt.savefig('figure3_resistance_distance.png', dpi=300, bbox_inches='tight')
print("Generated figure3_resistance_distance.pdf and .png")
plt.close()
# Create additional plot: Slippability based on resistance distance
fig2, ax = plt.subplots(figsize=(10, 6))
# Select some interesting concept pairs
concept_pairs = [
('left', 'right', 'Opposite directions'),
('first', 'last', 'Opposite positions'),
('left', 'leftmost', 'Direction to position'),
('predecessor', 'successor', 'Sequential relations'),
('a', 'b', 'Adjacent letters'),
('a', 'c', 'Non-adjacent letters'),
]
# Compute slippability for different temperatures
temperatures = np.linspace(10, 90, 50)
alpha_values = 0.1 * (100 - temperatures) / 50 # Alpha increases as temp decreases
for src, dst, label in concept_pairs:
    if src in connected_nodes and dst in connected_nodes:
        i = connected_nodes.index(src)
        j = connected_nodes.index(dst)
        R_ij = R_resistance[i, j]
        # Proposed slippability: 100 * exp(-alpha * R_ij)
        slippabilities = 100 * np.exp(-alpha_values * R_ij)
        ax.plot(temperatures, slippabilities, linewidth=2, label=label,
                marker='o', markersize=3)
ax.set_xlabel('Temperature', fontsize=12)
ax.set_ylabel('Slippability', fontsize=12)
ax.set_title('Temperature-Dependent Slippability using Resistance Distance\n' +
'Formula: slippability = 100 × exp(-α × R_ij), where α ∝ (100-T)',
fontsize=12, fontweight='bold')
ax.legend(fontsize=10, loc='upper left')
ax.grid(True, alpha=0.3)
ax.set_xlim([10, 90])
ax.set_ylim([0, 105])
# Add annotations
ax.axvspan(10, 30, alpha=0.1, color='blue', label='Low temp (exploitation)')
ax.axvspan(70, 90, alpha=0.1, color='red', label='High temp (exploration)')
ax.text(20, 95, 'Low temperature\n(restrictive slippage)', fontsize=9, ha='center')
ax.text(80, 95, 'High temperature\n(liberal slippage)', fontsize=9, ha='center')
plt.tight_layout()
plt.savefig('slippability_temperature.pdf', dpi=300, bbox_inches='tight')
plt.savefig('slippability_temperature.png', dpi=300, bbox_inches='tight')
print("Generated slippability_temperature.pdf and .png")
plt.close()

"""
Visualize workspace graph evolution and betweenness centrality (Figures 4 & 5)
Shows dynamic graph rewriting during analogy-making
"""
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
from matplotlib.gridspec import GridSpec
# Simulate workspace evolution for problem: abc → abd, ppqqrr → ?
# We'll create 4 time snapshots showing structure building
def create_workspace_snapshot(time_step):
    """Create the workspace graph at different time steps."""
    G = nx.Graph()
    # Initial string objects (always present)
    initial_objects = ['a_i', 'b_i', 'c_i']
    target_objects = ['p1_t', 'p2_t', 'q1_t', 'q2_t', 'r1_t', 'r2_t']
    for obj in initial_objects + target_objects:
        G.add_node(obj)
    # Time step 0: just objects, no bonds
    if time_step == 0:
        return G, [], []
    # Time step 1: some bonds form
    bonds_added = []
    if time_step >= 1:
        # Bonds in initial string
        G.add_edge('a_i', 'b_i', type='bond', category='predecessor')
        G.add_edge('b_i', 'c_i', type='bond', category='predecessor')
        bonds_added.extend([('a_i', 'b_i'), ('b_i', 'c_i')])
        # Bonds in target string (recognizing pairs)
        G.add_edge('p1_t', 'p2_t', type='bond', category='sameness')
        G.add_edge('q1_t', 'q2_t', type='bond', category='sameness')
        G.add_edge('r1_t', 'r2_t', type='bond', category='sameness')
        bonds_added.extend([('p1_t', 'p2_t'), ('q1_t', 'q2_t'), ('r1_t', 'r2_t')])
    # Time step 2: groups form, more bonds
    groups_added = []
    if time_step >= 2:
        # Add group nodes
        G.add_node('abc_i', node_type='group')
        G.add_node('pp_t', node_type='group')
        G.add_node('qq_t', node_type='group')
        G.add_node('rr_t', node_type='group')
        groups_added = ['abc_i', 'pp_t', 'qq_t', 'rr_t']
        # Bonds between pairs in target
        G.add_edge('p2_t', 'q1_t', type='bond', category='successor')
        G.add_edge('q2_t', 'r1_t', type='bond', category='successor')
        bonds_added.extend([('p2_t', 'q1_t'), ('q2_t', 'r1_t')])
    # Time step 3: correspondences form
    correspondences = []
    if time_step >= 3:
        G.add_edge('a_i', 'p1_t', type='correspondence')
        G.add_edge('b_i', 'q1_t', type='correspondence')
        G.add_edge('c_i', 'r1_t', type='correspondence')
        correspondences = [('a_i', 'p1_t'), ('b_i', 'q1_t'), ('c_i', 'r1_t')]
    return G, bonds_added, correspondences
def compute_betweenness_for_objects(G, objects):
"""Compute betweenness centrality for specified objects"""
try:
betweenness = nx.betweenness_centrality(G)
return {obj: betweenness.get(obj, 0.0) * 100 for obj in objects}
    except Exception:
        # fall back to zeros for degenerate graphs
        return {obj: 0.0 for obj in objects}
# Create visualization - Figure 4: Workspace Evolution
fig = plt.figure(figsize=(16, 10))
gs = GridSpec(2, 2, figure=fig, hspace=0.25, wspace=0.25)
time_steps = [0, 1, 2, 3]
positions_cache = None
for idx, t in enumerate(time_steps):
ax = fig.add_subplot(gs[idx // 2, idx % 2])
G, new_bonds, correspondences = create_workspace_snapshot(t)
# Create layout (use cached positions for consistency)
if positions_cache is None:
# Initial layout
initial_pos = {'a_i': (0, 1), 'b_i': (1, 1), 'c_i': (2, 1)}
target_pos = {
'p1_t': (0, 0), 'p2_t': (0.5, 0),
'q1_t': (1.5, 0), 'q2_t': (2, 0),
'r1_t': (3, 0), 'r2_t': (3.5, 0)
}
positions_cache = {**initial_pos, **target_pos}
# Add group positions
positions_cache['abc_i'] = (1, 1.3)
positions_cache['pp_t'] = (0.25, -0.3)
positions_cache['qq_t'] = (1.75, -0.3)
positions_cache['rr_t'] = (3.25, -0.3)
positions = {node: positions_cache[node] for node in G.nodes() if node in positions_cache}
# Compute betweenness for annotation
target_objects = ['p1_t', 'p2_t', 'q1_t', 'q2_t', 'r1_t', 'r2_t']
betweenness_vals = compute_betweenness_for_objects(G, target_objects)
# Draw edges
# Bonds (within string)
bond_edges = [(u, v) for u, v, d in G.edges(data=True) if d.get('type') == 'bond']
nx.draw_networkx_edges(G, positions, edgelist=bond_edges,
width=2, alpha=0.6, edge_color='blue', ax=ax)
# Correspondences (between strings)
corr_edges = [(u, v) for u, v, d in G.edges(data=True) if d.get('type') == 'correspondence']
nx.draw_networkx_edges(G, positions, edgelist=corr_edges,
width=2, alpha=0.6, edge_color='green',
style='dashed', ax=ax)
# Draw nodes
regular_nodes = [n for n in G.nodes() if '_' in n and not G.nodes.get(n, {}).get('node_type') == 'group']
group_nodes = [n for n in G.nodes() if G.nodes.get(n, {}).get('node_type') == 'group']
# Regular objects
nx.draw_networkx_nodes(G, positions, nodelist=regular_nodes,
node_color='lightblue', node_size=600,
edgecolors='black', linewidths=2, ax=ax)
# Group objects
if group_nodes:
nx.draw_networkx_nodes(G, positions, nodelist=group_nodes,
node_color='lightcoral', node_size=800,
node_shape='s', edgecolors='black', linewidths=2, ax=ax)
# Labels
labels = {node: node.replace('_i', '').replace('_t', '') for node in G.nodes()}
nx.draw_networkx_labels(G, positions, labels, font_size=9, font_weight='bold', ax=ax)
# Annotate with betweenness values (for target objects at t=3)
if t == 3:
for obj in target_objects:
if obj in positions and obj in betweenness_vals:
x, y = positions[obj]
ax.text(x, y - 0.15, f'B={betweenness_vals[obj]:.1f}',
fontsize=7, ha='center',
bbox=dict(boxstyle='round,pad=0.3', facecolor='yellow', alpha=0.7))
    step_captions = {
        0: 'Initial: Letters only',
        1: 'Bonds form within strings',
        2: 'Groups recognized, more bonds',
        3: 'Correspondences link strings',
    }
    ax.set_title(f'Time Step {t}\n{step_captions[t]}',
                 fontsize=11, fontweight='bold')
ax.axis('off')
ax.set_xlim([-0.5, 4])
ax.set_ylim([-0.7, 1.7])
fig.suptitle('Workspace Graph Evolution: abc → abd, ppqqrr → ?\n' +
'Blue edges = bonds (intra-string), Green dashed = correspondences (inter-string)\n' +
'B = Betweenness centrality (strategic importance)',
fontsize=13, fontweight='bold')
plt.savefig('figure4_workspace_evolution.pdf', dpi=300, bbox_inches='tight')
plt.savefig('figure4_workspace_evolution.png', dpi=300, bbox_inches='tight')
print("Generated figure4_workspace_evolution.pdf and .png")
plt.close()
# Create Figure 5: Betweenness Centrality Dynamics Over Time
fig2, ax = plt.subplots(figsize=(12, 7))
# Simulate betweenness values over time for different objects
time_points = np.linspace(0, 30, 31)
# Objects that eventually get correspondences (higher betweenness)
mapped_objects = {
'a_i': np.array([0, 5, 15, 30, 45, 55, 60, 65, 68, 70] + [70]*21),
'q1_t': np.array([0, 3, 10, 25, 45, 60, 70, 75, 78, 80] + [80]*21),
'c_i': np.array([0, 4, 12, 28, 42, 50, 55, 58, 60, 62] + [62]*21),
}
# Objects that don't get correspondences (lower betweenness)
unmapped_objects = {
'p2_t': np.array([0, 10, 25, 35, 40, 38, 35, 32, 28, 25] + [20]*21),
'r2_t': np.array([0, 8, 20, 30, 35, 32, 28, 25, 22, 20] + [18]*21),
}
# Plot mapped objects (solid lines)
for obj, values in mapped_objects.items():
label = obj.replace('_i', ' (initial)').replace('_t', ' (target)')
ax.plot(time_points, values, linewidth=2.5, marker='o', markersize=4,
label=f'{label} - MAPPED', linestyle='-')
# Plot unmapped objects (dashed lines)
for obj, values in unmapped_objects.items():
label = obj.replace('_i', ' (initial)').replace('_t', ' (target)')
ax.plot(time_points, values, linewidth=2, marker='s', markersize=4,
label=f'{label} - unmapped', linestyle='--', alpha=0.7)
ax.set_xlabel('Time Steps', fontsize=12)
ax.set_ylabel('Betweenness Centrality', fontsize=12)
ax.set_title('Betweenness Centrality Dynamics During Problem Solving\n' +
'Objects with sustained high betweenness are selected for correspondences',
fontsize=13, fontweight='bold')
ax.legend(fontsize=10, loc='upper left')
ax.grid(True, alpha=0.3)
ax.set_xlim([0, 30])
ax.set_ylim([0, 90])
# Shade the three phases; the text boxes below act as their labels
# (the legend is already drawn, so label= kwargs would not show up in it)
ax.axvspan(0, 10, alpha=0.1, color='yellow')   # structure building
ax.axvspan(10, 20, alpha=0.1, color='green')   # correspondence formation
ax.axvspan(20, 30, alpha=0.1, color='blue')    # convergence
ax.text(5, 85, 'Structure\nbuilding', fontsize=10, ha='center',
bbox=dict(boxstyle='round', facecolor='yellow', alpha=0.5))
ax.text(15, 85, 'Correspondence\nformation', fontsize=10, ha='center',
bbox=dict(boxstyle='round', facecolor='lightgreen', alpha=0.5))
ax.text(25, 85, 'Convergence', fontsize=10, ha='center',
bbox=dict(boxstyle='round', facecolor='lightblue', alpha=0.5))
# Add correlation annotation
ax.text(0.98, 0.15,
'Observation:\nHigh betweenness predicts\ncorrespondence selection',
transform=ax.transAxes, fontsize=11,
verticalalignment='bottom', horizontalalignment='right',
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.8))
plt.tight_layout()
plt.savefig('figure5_betweenness_dynamics.pdf', dpi=300, bbox_inches='tight')
plt.savefig('figure5_betweenness_dynamics.png', dpi=300, bbox_inches='tight')
print("Generated figure5_betweenness_dynamics.pdf and .png")
plt.close()
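The betweenness curves in Figure 5 are simulated for illustration (see the comments above); with the helpers already defined in this script, per-snapshot values could instead be measured directly:

```python
# Measure real betweenness per snapshot instead of simulating it
for t in time_steps:
    G, _, _ = create_workspace_snapshot(t)
    vals = compute_betweenness_for_objects(G, target_objects)
    print(t, {k: round(v, 1) for k, v in sorted(vals.items())})
```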

105
README.md
View File

@ -1,3 +1,104 @@
# copycat
Copycat
=========
Open Source copycat (python)
![GUI](https://i.imgur.com/AhhpzVQ.png)
An implementation of [Douglas Hofstadter](http://prelectur.stanford.edu/lecturers/hofstadter/) and [Melanie Mitchell](https://melaniemitchell.me/)'s Copycat algorithm.
The Copycat algorithm is explained [on Wikipedia](https://en.wikipedia.org/wiki/Copycat_%28software%29), in Melanie Mitchell's Book [Analogy-making as perception](https://www.amazon.com/Analogy-Making-Perception-Computer-Modeling-Connectionism/dp/026251544X/ref=sr_1_5?crid=1FC76DCS33513&dib=eyJ2IjoiMSJ9.TQVbRbFf696j7ZYj_sb4tIM3ZbFbuCIdtdYCy-Mq3EmJI6xbG5hhVXuyOPjeb7E4b8jhKiJlfr6NnD_O09rEEkNMwD_1zFxkLT9OkF81RSFL4kMCLOT7K-7KnPwBFbrc9tZuhLKFOWbxMGNL75koMcetQl2Lf6V7xsNYLYLCHBlXMCrusJ88Kv3Y8jiPKwrEr1hUwhWB8vtwEG9vSYXU7Gw-b4fZRNNbUtBBWNwiK3k.IJZZ8kA_QirWQK1ax5i42zD2nV7XvKoPYRgN94en4Dc&dib_tag=se&keywords=melanie+mitchell&qid=1745436638&sprefix=melanie+mitchell%2Caps%2C206&sr=8-5#), and in [this paper](https://github.com/Alex-Linhares/FARGonautica/blob/master/Literature/Foundations-Chalmers.French.and.Hofstadter-1992-Journal%20of%20Experimental%20and%20Theoretical%20Artificial%20Intelligence.pdf). The wikipedia page has additional links for deeper reading. See also [FARGonautica](https://github.com/Alex-Linhares/Fargonautica), where a collection of Fluid Concepts projects are available.
This implementation is a copycat of Scott Boland's [Java implementation](https://archive.org/details/JavaCopycat).
The original Java-to-Python translation work was done by J Alan Brogan (@jalanb on GitHub).
The Java version has a GUI similar to the original Lisp; this Python version has no GUI code built in but can be incorporated into a larger GUI program.
J. Alan Brogan writes:
> In cases where I could not grok the Java implementation easily, I took ideas from the
> [LISP implementation](http://web.cecs.pdx.edu/~mm/how-to-get-copycat.html), or directly
> from [Melanie Mitchell](https://en.wikipedia.org/wiki/Melanie_Mitchell)'s book
> "[Analogy-Making as Perception](http://www.amazon.com/Analogy-Making-Perception-Computer-Melanie-Mitchell/dp/0262132893/ref=tmm_hrd_title_0?ie=UTF8&qid=1351269085&sr=1-3)".
Running the command-line program
--------------------------------
To clone the repo locally, run these commands:
```
$ git clone https://github.com/fargonauts/copycat.git
$ cd copycat/copycat
$ python main.py abc abd ppqqrr --iterations 10
```
The script takes three or four arguments.
The first two are a pair of strings with some change, for example "abc" and "abd".
The third is a string which the script should try to change analogously.
The fourth (which defaults to "1") is a number of iterations.
This might produce output such as
```
ppqqss: 6 (avg time 869.0, avg temp 23.4)
ppqqrs: 4 (avg time 439.0, avg temp 37.3)
```
The first number indicates how many times Copycat chose that string as its answer; higher means "more obvious".
The last number indicates the average final temperature of the workspace; lower means "more elegant".
Code structure
---------------------
This Copycat system consists of 4,981 lines of Python code across 40 files. Here's a breakdown.
Core Components:
- codeletMethods.py: 1,124 lines (largest file)
- curses_reporter.py: 436 lines
- coderack.py: 310 lines
- slipnet.py: 248 lines
Workspace Components:
- group.py: 237 lines
- bond.py: 211 lines
- correspondence.py: 204 lines
- workspace.py: 195 lines
- workspaceObject.py: 194 lines
Control Components:
- temperature.py: 175 lines
- conceptMapping.py: 153 lines
- rule.py: 149 lines
- copycat.py: 144 lines
GUI Components:
- gui/gui.py: 96 lines
- gui/workspacecanvas.py: 70 lines
- gui/status.py: 66 lines
- gui/control.py: 59 lines
The system is well-organized with clear separation of concerns:
- Core logic (codelets, coderack, slipnet)
- Workspace management (groups, bonds, correspondences)
- Control systems (temperature, rules)
- User interface (GUI components)
The largest file, codeletMethods.py, contains all the codelet behavior implementations, which makes sense as it's the heart of the system's analogical reasoning capabilities.
`{source}_README.md` Files
---------------------
We had an LLM document every code file, so people can read a particular README before delving into the code (here's one [example](main_README.md)).
Installing the module
---------------------
To install the Python module and get started with it, run these commands:
```
$ pip install -e git+git://github.com/fargonauts/copycat.git#egg=copycat
$ python
>>> from copycat import Copycat
>>> Copycat().run('abc', 'abd', 'ppqqrr', 10)
{'ppqqrs': {'count': 4, 'avgtime': 439, 'avgtemp': 37.3}, 'ppqqss': {'count': 6, 'avgtime': 869, 'avgtemp': 23.4}}
```
The result of `run` is a dict containing the same information as was printed by `main.py` above.
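The package root (shown below) also exports `plot_answers` and `save_answers` helpers. A quick sketch of post-processing the returned dict:

```python
from copycat import Copycat

answers = Copycat().run('abc', 'abd', 'ppqqrr', 10)
# Rank candidate answers by how often Copycat produced them
for s, d in sorted(answers.items(), key=lambda kv: -kv[1]['count']):
    print('%s: %d (avg time %.1f, avg temp %.1f)'
          % (s, d['count'], d['avgtime'], d['avgtemp']))
```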

4
copycat/__init__.py Normal file
View File

@ -0,0 +1,4 @@
from .copycat import Copycat, Reporter # noqa
from .problem import Problem
from .plot import plot_answers
from .io import save_answers

211
copycat/bond.py Normal file
View File

@ -0,0 +1,211 @@
from .workspaceStructure import WorkspaceStructure
class Bond(WorkspaceStructure):
# pylint: disable=too-many-arguments
def __init__(self, ctx, source, destination, bondCategory, bondFacet,
sourceDescriptor, destinationDescriptor):
WorkspaceStructure.__init__(self, ctx)
slipnet = self.ctx.slipnet
self.source = source
self.string = self.source.string
self.destination = destination
self.leftObject = self.source
self.rightObject = self.destination
self.directionCategory = slipnet.right
if self.source.leftIndex > self.destination.rightIndex:
self.leftObject = self.destination
self.rightObject = self.source
self.directionCategory = slipnet.left
self.facet = bondFacet
self.sourceDescriptor = sourceDescriptor
self.destinationDescriptor = destinationDescriptor
self.category = bondCategory
if (self.sourceDescriptor == self.destinationDescriptor):
self.directionCategory = None
def flippedVersion(self):
slipnet = self.ctx.slipnet
return Bond(
self.ctx,
self.destination, self.source,
self.category.getRelatedNode(slipnet.opposite),
self.facet, self.destinationDescriptor, self.sourceDescriptor
)
def __repr__(self):
return '<Bond: %s>' % self.__str__()
def __str__(self):
return '%s bond between %s and %s' % (
self.category.name, self.leftObject, self.rightObject,
)
def buildBond(self):
workspace = self.ctx.workspace
workspace.structures += [self]
self.string.bonds += [self]
self.category.buffer = 100.0
if self.directionCategory:
self.directionCategory.buffer = 100.0
self.leftObject.rightBond = self
self.rightObject.leftBond = self
self.leftObject.bonds += [self]
self.rightObject.bonds += [self]
def break_the_structure(self):
self.breakBond()
def breakBond(self):
workspace = self.ctx.workspace
if self in workspace.structures:
workspace.structures.remove(self)
if self in self.string.bonds:
self.string.bonds.remove(self)
self.leftObject.rightBond = None
self.rightObject.leftBond = None
if self in self.leftObject.bonds:
self.leftObject.bonds.remove(self)
if self in self.rightObject.bonds:
self.rightObject.bonds.remove(self)
def getIncompatibleCorrespondences(self):
# returns a list of correspondences that are incompatible with
# self bond
workspace = self.ctx.workspace
incompatibles = []
if self.leftObject.leftmost and self.leftObject.correspondence:
correspondence = self.leftObject.correspondence
if self.string == workspace.initial:
objekt = self.leftObject.correspondence.objectFromTarget
else:
objekt = self.leftObject.correspondence.objectFromInitial
if objekt.leftmost and objekt.rightBond:
if (
objekt.rightBond.directionCategory and
objekt.rightBond.directionCategory != self.directionCategory
):
incompatibles += [correspondence]
if self.rightObject.rightmost and self.rightObject.correspondence:
correspondence = self.rightObject.correspondence
if self.string == workspace.initial:
objekt = self.rightObject.correspondence.objectFromTarget
else:
objekt = self.rightObject.correspondence.objectFromInitial
if objekt.rightmost and objekt.leftBond:
if (
objekt.leftBond.directionCategory and
objekt.leftBond.directionCategory != self.directionCategory
):
incompatibles += [correspondence]
return incompatibles
def updateInternalStrength(self):
slipnet = self.ctx.slipnet
        # bonds between objects of the same type (i.e. letter or group)
        # are stronger than bonds between different types
sourceGap = self.source.leftIndex != self.source.rightIndex
destinationGap = (self.destination.leftIndex !=
self.destination.rightIndex)
if sourceGap == destinationGap:
memberCompatibility = 1.0
else:
memberCompatibility = 0.7
# letter category bonds are stronger
if self.facet == slipnet.letterCategory:
facetFactor = 1.0
else:
facetFactor = 0.7
strength = min(100.0, memberCompatibility * facetFactor *
self.category.bondDegreeOfAssociation())
self.internalStrength = strength
def updateExternalStrength(self):
self.externalStrength = 0.0
supporters = self.numberOfLocalSupportingBonds()
if supporters > 0.0:
density = self.localDensity() / 100.0
density = density ** 0.5 * 100.0
            supportFactor = 0.6 ** (1.0 / supporters ** 3)
            # NB: 0.6 ** x < 1 for x > 0, so max() leaves this at 1.0;
            # min() was probably intended to cap, not floor, the factor
            supportFactor = max(1.0, supportFactor)
strength = supportFactor * density
self.externalStrength = strength
def numberOfLocalSupportingBonds(self):
return sum(
1 for b in self.string.bonds if
b.string == self.source.string and
self.leftObject.letterDistance(b.leftObject) != 0 and
self.rightObject.letterDistance(b.rightObject) != 0 and
self.category == b.category and
self.directionCategory == b.directionCategory
)
def sameCategories(self, other):
return (self.category == other.category and
self.directionCategory == other.directionCategory)
def myEnds(self, object1, object2):
if self.source == object1 and self.destination == object2:
return True
return self.source == object2 and self.destination == object1
def localDensity(self):
# returns a rough measure of the density in the string
# of the same bond-category and the direction-category of
# the given bond
workspace = self.ctx.workspace
slotSum = 0.0
supportSum = 0.0
for object1 in workspace.objects:
if object1.string == self.string:
for object2 in workspace.objects:
if object1.beside(object2):
slotSum += 1.0
for bond in self.string.bonds:
if (
bond != self and
self.sameCategories(bond) and
self.myEnds(object1, object2)
):
supportSum += 1.0
try:
return 100.0 * supportSum / slotSum
except ZeroDivisionError:
return 0.0
def sameNeighbors(self, other):
if self.leftObject == other.leftObject:
return True
return self.rightObject == other.rightObject
def getIncompatibleBonds(self):
return [b for b in self.string.bonds if self.sameNeighbors(b)]
def set_source(self, value):
self.source = value
def possibleGroupBonds(self, bonds):
result = []
slipnet = self.ctx.slipnet
for bond in bonds:
if (
bond.category == self.category and
bond.directionCategory == self.directionCategory
):
result += [bond]
else:
# a modified bond might be made
if bond.category == self.category:
return [] # a different bond cannot be made here
if bond.directionCategory == self.directionCategory:
return [] # a different bond cannot be made here
if slipnet.sameness in [self.category, bond.category]:
return []
bond = Bond(
bond.ctx, bond.destination, bond.source, self.category,
self.facet, bond.destinationDescriptor,
bond.sourceDescriptor
)
result += [bond]
return result

54
copycat/bond_README.md Normal file
View File

@ -0,0 +1,54 @@
# README_bond.md
## Overview
`bond.py` implements the Bond system, a key component of the Copycat system that manages the relationships between objects in strings. It handles the creation, evaluation, and management of bonds that represent meaningful connections between objects based on their properties and relationships.
## Core Components
- `Bond` class: Main class that represents a bond between objects
- Bond evaluation system
- Bond compatibility management
## Key Features
- Manages bonds between objects in strings
- Evaluates bond strength based on multiple factors
- Handles bond direction and category
- Supports bond flipping and versioning
- Manages bond compatibility and support
## Bond Components
- `source`: Source object of the bond
- `destination`: Destination object of the bond
- `category`: Category of the bond
- `facet`: Aspect of the bond
- `directionCategory`: Direction of the bond
- `sourceDescriptor`: Descriptor of the source object
- `destinationDescriptor`: Descriptor of the destination object
## Main Methods
- `updateInternalStrength()`: Calculate internal bond strength
- `updateExternalStrength()`: Calculate external bond strength
- `buildBond()`: Create and establish bond
- `breakBond()`: Remove bond
- `localSupport()`: Calculate local support
- `numberOfLocalSupportingBonds()`: Count supporting bonds
- `sameCategories()`: Compare bond categories
- `localDensity()`: Calculate local bond density
## Bond Types
- Letter category bonds
- Direction-based bonds
- Category-based bonds
- Flipped bonds
- Modified bonds
## Dependencies
- Requires `workspaceStructure` module
- Used by the main `copycat` module
## Notes
- Bonds are evaluated based on member compatibility and facet factors
- The system supports both same-type and different-type bonds
- Bonds can have direction categories (left, right)
- The system handles bond compatibility and support
- Bonds can be flipped to create alternative versions
- Local density and support factors influence bond strength
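A minimal usage sketch (names follow `bond.py`; `ctx`, `slipnet`, the two letter objects, and their descriptors are assumed to come from a running Copycat instance):

```python
# Propose a successor bond between two adjacent letters, then build it
bond = Bond(ctx, left_letter, right_letter,
            slipnet.successor, slipnet.letterCategory,
            left_descriptor, right_descriptor)
bond.updateInternalStrength()
bond.updateExternalStrength()
bond.buildBond()   # registers the bond with the workspace and both objects
```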

9
copycat/codelet.py Normal file
View File

@ -0,0 +1,9 @@
class Codelet(object):
def __init__(self, name, urgency, arguments, currentTime):
self.name = name
self.urgency = urgency
self.arguments = arguments
self.birthdate = currentTime
def __repr__(self):
return '<Codelet: %s>' % self.name

2012
copycat/codeletMethods.py Normal file

File diff suppressed because it is too large

View File

@ -0,0 +1,51 @@
# Codelet Methods
## Overview
The codelet methods system is a core component of the Copycat architecture that implements the various operations and behaviors that codelets can perform. This system defines the actual implementation of codelet behaviors that drive the analogical reasoning process.
## Key Components
- Codelet operation implementations
- Behavior definitions
- Action handlers
- State management
- Event processing
## Codelet Types
1. **Workspace Codelets**
- Object creation and modification
- Structure building
- Relationship formation
- Group management
2. **Slipnet Codelets**
- Concept activation
- Node management
- Link formation
- Activation spreading
3. **Correspondence Codelets**
- Mapping creation
- Relationship matching
- Structure alignment
- Similarity assessment
## Usage
Codelet methods are called by the coderack system when codelets are executed:
```python
# Example of a codelet method (the actual signature used in
# codeletMethods.py: a context object plus the codelet instance)
@codelet('breaker')
def breaker(ctx, codelet):
    workspace = ctx.workspace   # operate on workspace structures
    ...                         # update state, post follow-up codelets
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Coderack: Manages codelet execution
- Workspace: Provides objects to operate on
- Slipnet: Provides conceptual knowledge
- Correspondence: Manages mappings between structures

98
copycat/codelet_README.md Normal file
View File

@ -0,0 +1,98 @@
# Codelet System
## Overview
The codelet system is a fundamental component of the Copycat architecture that defines the basic structure and behavior of codelets. Codelets are small, specialized agents that perform specific operations in the workspace, forming the basis of the system's parallel processing capabilities.
## Key Features
- Codelet structure
- Behavior definition
- Priority management
- State tracking
- Execution control
## Codelet Types
1. **Basic Codelets**
- Scout codelets
- Builder codelets
- Evaluator codelets
- Breaker codelets
2. **Specialized Codelets**
- Group codelets
- Bond codelets
- Correspondence codelets
- Rule codelets
3. **Control Codelets**
- Temperature codelets
- Pressure codelets
- Urgency codelets
- Cleanup codelets
## Usage
Codelets are created and managed through the codelet system.
A codelet's name selects one of the **many codelet methods defined in codeletMethods.py**; when the coderack picks the codelet to run, it looks the behavior up by that name.
For example, a codelet might be named:
- group-builder - to create a new group of related elements
- bond-strength-tester - to assess the strength of a proposed bond
- bottom-up-correspondence-scout - to look for potential mappings between structures
The name is what gives each codelet its purpose and role in the system's parallel processing architecture. When the coderack executes the codelet, the looked-up behavior runs in the context of the current workspace state.
```python
# Create a codelet: name, urgency bin, arguments, birthdate
codelet = Codelet('bottom-up-bond-scout', 3, [], coderack.codeletsRun)
# Post it; when chosen, the coderack dispatches to the method
# registered under this name in codeletMethods.py
coderack.post(codelet)
```
## Codelet Decorator
The `@codelet` decorator is defined in `codeletMethods.py` and is used to mark functions as codelet behaviors. Here's its implementation:
```python
def codelet(name):
"""Decorator for otherwise-unused functions that are in fact used as codelet behaviors"""
def wrap(f):
# Verify that the decorated function has exactly two parameters:
# 1. ctx - the context object containing workspace, slipnet, etc.
# 2. codelet - the codelet instance itself
# The None values in the tuple represent: no default args, no *args, no **kwargs
assert tuple(inspect.getargspec(f)) == (['ctx', 'codelet'], None, None, None)
# Mark this function as a valid codelet method
f.is_codelet_method = True
# Store the codelet type name for reference
f.codelet_name = name
return f
return wrap
```
The decorator:
1. Takes a `name` parameter that identifies the type of codelet
2. Wraps the decorated function with additional metadata:
- Marks the function as a codelet method with `is_codelet_method = True`
- Stores the codelet name with `codelet_name = name`
3. Verifies that the decorated function has the correct signature (must take `ctx` and `codelet` parameters)
Example usage:
```python
@codelet('breaker')
def breaker(ctx, codelet):
# Codelet behavior implementation
pass
```
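Note that `inspect.getargspec` was removed in Python 3.11; on newer interpreters the same signature check can be written with `inspect.getfullargspec`:

```python
import inspect

spec = inspect.getfullargspec(f)
assert (spec.args, spec.varargs, spec.varkw, spec.defaults) == \
       (['ctx', 'codelet'], None, None, None)
```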
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Coderack: Manages codelet execution
- CodeletMethods: Provides codelet behaviors
- Workspace: Environment for codelets
- Temperature: Influences codelet behavior

310
copycat/coderack.py Normal file
View File

@ -0,0 +1,310 @@
import math
import logging
from . import codeletMethods
from .bond import Bond
from .codelet import Codelet
from .correspondence import Correspondence
from .description import Description
from .group import Group
from .rule import Rule
NUMBER_OF_BINS = 7
def getUrgencyBin(urgency):
    # integer division: true division would produce float bin indices
    i = int(urgency) * NUMBER_OF_BINS // 100
    if i >= NUMBER_OF_BINS:
        return NUMBER_OF_BINS
    return i + 1
class Coderack(object):
def __init__(self, ctx):
self.ctx = ctx
self.reset()
self.methods = {}
for name in dir(codeletMethods):
method = getattr(codeletMethods, name)
if getattr(method, 'is_codelet_method', False):
self.methods[method.codelet_name] = method
assert set(self.methods.keys()) == set([
'breaker',
'bottom-up-description-scout',
'top-down-description-scout',
'description-strength-tester',
'description-builder',
'bottom-up-bond-scout',
'top-down-bond-scout--category',
'top-down-bond-scout--direction',
'bond-strength-tester',
'bond-builder',
'top-down-group-scout--category',
'top-down-group-scout--direction',
'group-scout--whole-string',
'group-strength-tester',
'group-builder',
'replacement-finder',
'rule-scout',
'rule-strength-tester',
'rule-builder',
'rule-translator',
'bottom-up-correspondence-scout',
'important-object-correspondence-scout',
'correspondence-strength-tester',
'correspondence-builder',
])
def reset(self):
self.codelets = []
self.codeletsRun = 0
def updateCodelets(self):
if self.codeletsRun > 0:
self.postTopDownCodelets()
self.postBottomUpCodelets()
def probabilityOfPosting(self, codeletName):
# TODO: use entropy
temperature = self.ctx.temperature
workspace = self.ctx.workspace
if codeletName == 'breaker':
return 1.0
if 'replacement' in codeletName:
if workspace.numberOfUnreplacedObjects() > 0:
return 1.0
return 0.0
if 'rule' in codeletName:
if not workspace.rule:
return 1.0
return workspace.rule.totalWeakness() / 100.0
if 'correspondence' in codeletName:
return workspace.interStringUnhappiness / 100.0
if 'description' in codeletName:
# TODO: use entropy
return (temperature.value() / 100.0) ** 2
return workspace.intraStringUnhappiness / 100.0
def howManyToPost(self, codeletName):
random = self.ctx.random
workspace = self.ctx.workspace
if codeletName == 'breaker' or 'description' in codeletName:
return 1
if 'translator' in codeletName:
if not workspace.rule:
return 0
return 1
if 'rule' in codeletName:
return 2
if 'group' in codeletName and not workspace.numberOfBonds():
return 0
if 'replacement' in codeletName and workspace.rule:
return 0
number = 0
if 'bond' in codeletName:
number = workspace.numberOfUnrelatedObjects()
if 'group' in codeletName:
number = workspace.numberOfUngroupedObjects()
if 'replacement' in codeletName:
number = workspace.numberOfUnreplacedObjects()
if 'correspondence' in codeletName:
number = workspace.numberOfUncorrespondingObjects()
if number < random.sqrtBlur(2.0):
return 1
if number < random.sqrtBlur(4.0):
return 2
return 3
def post(self, codelet):
self.codelets += [codelet]
if len(self.codelets) > 100:
oldCodelet = self.chooseOldCodelet()
self.removeCodelet(oldCodelet)
def postTopDownCodelets(self):
random = self.ctx.random
slipnet = self.ctx.slipnet
for node in slipnet.slipnodes:
if node.activation != 100.0:
continue
for codeletName in node.codelets:
probability = self.probabilityOfPosting(codeletName)
howMany = self.howManyToPost(codeletName)
for _ in range(howMany):
if not random.coinFlip(probability):
continue
urgency = getUrgencyBin(
node.activation * node.conceptualDepth / 100.0)
codelet = Codelet(codeletName, urgency, [node], self.codeletsRun)
logging.info('Post top down: %s, with urgency: %d',
codelet.name, urgency)
self.post(codelet)
def postBottomUpCodelets(self):
logging.info("posting bottom up codelets")
self.__postBottomUpCodelets('bottom-up-description-scout')
self.__postBottomUpCodelets('bottom-up-bond-scout')
self.__postBottomUpCodelets('group-scout--whole-string')
self.__postBottomUpCodelets('bottom-up-correspondence-scout')
self.__postBottomUpCodelets('important-object-correspondence-scout')
self.__postBottomUpCodelets('replacement-finder')
self.__postBottomUpCodelets('rule-scout')
self.__postBottomUpCodelets('rule-translator')
self.__postBottomUpCodelets('breaker')
def __postBottomUpCodelets(self, codeletName):
random = self.ctx.random
# TODO: use entropy
temperature = self.ctx.temperature
probability = self.probabilityOfPosting(codeletName)
howMany = self.howManyToPost(codeletName)
urgency = 3
if codeletName == 'breaker':
urgency = 1
# TODO: use entropy
if temperature.value() < 25.0 and 'translator' in codeletName:
urgency = 5
for _ in range(howMany):
if random.coinFlip(probability):
codelet = Codelet(codeletName, urgency, [], self.codeletsRun)
self.post(codelet)
def removeCodelet(self, codelet):
self.codelets.remove(codelet)
def newCodelet(self, name, strength, arguments):
urgency = getUrgencyBin(strength)
newCodelet = Codelet(name, urgency, arguments, self.codeletsRun)
self.post(newCodelet)
# pylint: disable=too-many-arguments
def proposeRule(self, facet, description, category, relation):
"""Creates a proposed rule, and posts a rule-strength-tester codelet.
The new codelet has urgency a function of
the degree of conceptual-depth of the descriptions in the rule
"""
rule = Rule(self.ctx, facet, description, category, relation)
rule.updateStrength()
if description and relation:
averageDepth = (description.conceptualDepth + relation.conceptualDepth) / 2.0
urgency = 100.0 * math.sqrt(averageDepth / 100.0)
else:
urgency = 0
self.newCodelet('rule-strength-tester', urgency, [rule])
def proposeCorrespondence(self, initialObject, targetObject,
conceptMappings, flipTargetObject):
correspondence = Correspondence(self.ctx, initialObject, targetObject,
conceptMappings, flipTargetObject)
for mapping in conceptMappings:
mapping.initialDescriptionType.buffer = 100.0
mapping.initialDescriptor.buffer = 100.0
mapping.targetDescriptionType.buffer = 100.0
mapping.targetDescriptor.buffer = 100.0
mappings = correspondence.distinguishingConceptMappings()
urgency = sum(mapping.strength() for mapping in mappings)
numberOfMappings = len(mappings)
if urgency:
urgency /= numberOfMappings
binn = getUrgencyBin(urgency)
logging.info('urgency: %s, number: %d, bin: %d',
urgency, numberOfMappings, binn)
self.newCodelet('correspondence-strength-tester',
urgency, [correspondence])
def proposeDescription(self, objekt, type_, descriptor):
description = Description(objekt, type_, descriptor)
descriptor.buffer = 100.0
urgency = type_.activation
self.newCodelet('description-strength-tester',
urgency, [description])
def proposeSingleLetterGroup(self, source):
slipnet = self.ctx.slipnet
self.proposeGroup([source], [], slipnet.samenessGroup, None,
slipnet.letterCategory)
def proposeGroup(self, objects, bondList, groupCategory, directionCategory,
bondFacet):
slipnet = self.ctx.slipnet
bondCategory = groupCategory.getRelatedNode(slipnet.bondCategory)
bondCategory.buffer = 100.0
if directionCategory:
directionCategory.buffer = 100.0
group = Group(objects[0].string, groupCategory, directionCategory,
bondFacet, objects, bondList)
urgency = bondCategory.bondDegreeOfAssociation()
self.newCodelet('group-strength-tester', urgency, [group])
def proposeBond(self, source, destination, bondCategory, bondFacet,
sourceDescriptor, destinationDescriptor):
bondFacet.buffer = 100.0
sourceDescriptor.buffer = 100.0
destinationDescriptor.buffer = 100.0
bond = Bond(self.ctx, source, destination, bondCategory, bondFacet,
sourceDescriptor, destinationDescriptor)
urgency = bondCategory.bondDegreeOfAssociation()
self.newCodelet('bond-strength-tester', urgency, [bond])
def chooseOldCodelet(self):
# selects an old codelet to remove from the coderack
# more likely to select lower urgency codelets
urgencies = []
for codelet in self.codelets:
urgency = ((self.codeletsRun - codelet.birthdate) *
(7.5 - codelet.urgency))
urgencies += [urgency]
random = self.ctx.random
return random.weighted_choice(self.codelets, urgencies)
def postInitialCodelets(self):
workspace = self.ctx.workspace
n = len(workspace.objects)
if n == 0:
# The most pathological case.
codeletsToPost = [
('rule-scout', 1),
]
else:
codeletsToPost = [
('bottom-up-bond-scout', 2 * n),
('replacement-finder', 2 * n),
('bottom-up-correspondence-scout', 2 * n),
]
for name, count in codeletsToPost:
for _ in range(count):
codelet = Codelet(name, 1, [], self.codeletsRun)
self.post(codelet)
def chooseAndRunCodelet(self):
if not len(self.codelets):
# Indeed, this happens fairly often.
self.postInitialCodelets()
codelet = self.chooseCodeletToRun()
self.run(codelet)
def chooseCodeletToRun(self):
random = self.ctx.random
# TODO: use entropy
temperature = self.ctx.temperature
assert self.codelets
# TODO: use entropy
scale = (100.0 - temperature.value() + 10.0) / 15.0
chosen = random.weighted_choice(self.codelets, [codelet.urgency ** scale for codelet in self.codelets])
self.removeCodelet(chosen)
return chosen
def run(self, codelet):
methodName = codelet.name
self.codeletsRun += 1
method = self.methods[methodName]
try:
method(self.ctx, codelet)
except AssertionError:
pass

View File

@ -0,0 +1,49 @@
# README_coderack.md
## Overview
`coderack.py` implements the Coderack, a key component of the Copycat system that manages the execution of codelets (small, focused procedures) that drive the analogical reasoning process. It handles the posting, selection, and execution of codelets based on their urgency and the current state of the system.
## Core Components
- `Coderack` class: Main class that manages codelet execution
- Codelet management system
- Urgency-based codelet selection
## Key Features
- Manages a collection of codelets (small procedures)
- Implements urgency-based codelet selection
- Supports both top-down and bottom-up codelet posting
- Handles codelet execution and removal
- Manages rule, correspondence, description, and group proposals
## Codelet Types
- Breaker codelets
- Description codelets (top-down and bottom-up)
- Bond codelets (top-down and bottom-up)
- Group codelets (top-down and whole-string)
- Rule codelets (scout, strength-tester, builder, translator)
- Correspondence codelets (bottom-up and important-object)
- Replacement finder codelets
## Main Methods
- `post(codelet)`: Add a codelet to the coderack
- `chooseAndRunCodelet()`: Select and execute a codelet
- `postTopDownCodelets()`: Post codelets from activated slipnet nodes
- `postBottomUpCodelets()`: Post codelets based on workspace state
- `proposeRule()`: Create and post a rule proposal
- `proposeCorrespondence()`: Create and post a correspondence proposal
- `proposeDescription()`: Create and post a description proposal
- `proposeGroup()`: Create and post a group proposal
- `proposeBond()`: Create and post a bond proposal
## Dependencies
- Requires `codeletMethods` module
- Uses `bond`, `codelet`, `correspondence`, `description`, `group`, and `rule` modules
- Uses `logging` for debug output
- Uses `math` for urgency calculations
## Notes
- Codelets are organized by urgency bins
- The system maintains a maximum of 100 codelets
- Codelet selection is influenced by temperature and workspace state
- The system supports both deterministic and probabilistic codelet posting
- Codelet urgency is calculated based on various factors including conceptual depth
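To make the temperature effect concrete: `chooseCodeletToRun` weights each codelet by `urgency ** scale`, with `scale = (100 - T + 10) / 15`. A quick illustration of the selection pressure this creates:

```python
# Relative preference for an urgency-7 codelet over an urgency-1 codelet
for T in (90, 50, 10):
    scale = (100.0 - T + 10.0) / 15.0
    print('T=%2d: bin 7 weighted %8.0fx over bin 1' % (T, 7.0 ** scale))
```

At high temperature the selection is nearly flat (roughly 13x); at low temperature it becomes sharply greedy (hundreds of thousands of times), which is how temperature trades exploration for exploitation.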

153
copycat/conceptMapping.py Normal file
View File

@ -0,0 +1,153 @@
class ConceptMapping(object):
def __init__(self, initialDescriptionType, targetDescriptionType,
initialDescriptor, targetDescriptor,
initialObject, targetObject):
self.slipnet = initialDescriptionType.slipnet
self.initialDescriptionType = initialDescriptionType
self.targetDescriptionType = targetDescriptionType
self.initialDescriptor = initialDescriptor
self.targetDescriptor = targetDescriptor
self.initialObject = initialObject
self.targetObject = targetObject
self.label = initialDescriptor.getBondCategory(targetDescriptor)
def __repr__(self):
return '<ConceptMapping: %s from %s to %s>' % (
self.__str__(), self.initialDescriptor, self.targetDescriptor)
def __str__(self):
return self.label.name if self.label else 'anonymous'
def slippability(self):
association = self.__degreeOfAssociation()
if association == 100.0:
return 100.0
depth = self.__conceptualDepth() / 100.0
return association * (1 - depth * depth)
def __degreeOfAssociation(self):
# Assumes the 2 descriptors are connected in the slipnet by <= 1 link
if self.initialDescriptor == self.targetDescriptor:
return 100.0
for link in self.initialDescriptor.lateralSlipLinks:
if link.destination == self.targetDescriptor:
return link.degreeOfAssociation()
return 0.0
def strength(self):
association = self.__degreeOfAssociation()
if association == 100.0:
return 100.0
depth = self.__conceptualDepth() / 100.0
return association * (1 + depth * depth)
def __conceptualDepth(self):
return (self.initialDescriptor.conceptualDepth +
self.targetDescriptor.conceptualDepth) / 2.0
def distinguishing(self):
slipnet = self.slipnet
if self.initialDescriptor == slipnet.whole:
if self.targetDescriptor == slipnet.whole:
return False
if not self.initialObject.distinguishingDescriptor(
self.initialDescriptor):
return False
return self.targetObject.distinguishingDescriptor(
self.targetDescriptor)
def sameInitialType(self, other):
return self.initialDescriptionType == other.initialDescriptionType
def sameTargetType(self, other):
return self.targetDescriptionType == other.targetDescriptionType
def sameTypes(self, other):
return self.sameInitialType(other) and self.sameTargetType(other)
def sameInitialDescriptor(self, other):
return self.initialDescriptor == other.initialDescriptor
def sameTargetDescriptor(self, other):
return self.targetDescriptor == other.targetDescriptor
def sameDescriptors(self, other):
if self.sameInitialDescriptor(other):
return self.sameTargetDescriptor(other)
return False
def sameKind(self, other):
return self.sameTypes(other) and self.sameDescriptors(other)
def nearlySameKind(self, other):
return self.sameTypes(other) and self.sameInitialDescriptor(other)
def isContainedBy(self, mappings):
return any(self.sameKind(mapping) for mapping in mappings)
def isNearlyContainedBy(self, mappings):
return any(self.nearlySameKind(mapping) for mapping in mappings)
def related(self, other):
if self.initialDescriptor.related(other.initialDescriptor):
return True
return self.targetDescriptor.related(other.targetDescriptor)
def incompatible(self, other):
# Concept-mappings (a -> b) and (c -> d) are incompatible if a is
# related to c or if b is related to d, and the a -> b relationship is
# different from the c -> d relationship. E.g., rightmost -> leftmost
# is incompatible with right -> right, since rightmost is linked
# to right, but the relationships (opposite and identity) are different
# Notice that slipnet distances are not looked at, only slipnet links.
# This should be changed eventually.
if not self.related(other):
return False
if not self.label or not other.label:
return False
return self.label != other.label
def supports(self, other):
# Concept-mappings (a -> b) and (c -> d) support each other if a is
# related to c and if b is related to d and the a -> b relationship is
# the same as the c -> d relationship. E.g., rightmost -> rightmost
# supports right -> right and leftmost -> leftmost.
# Notice that slipnet distances are not looked at, only slipnet links.
# This should be changed eventually.
        # If the two concept-mappings are the same, then return True. This
# means that letter->group supports letter->group, even though these
# concept-mappings have no label.
if self.sameDescriptors(other):
return True
# if the descriptors are not related return false
if not self.related(other):
return False
if not self.label or not other.label:
return False
return self.label == other.label
def relevant(self):
if self.initialDescriptionType.fully_active():
return self.targetDescriptionType.fully_active()
return False
def slippage(self):
slipnet = self.slipnet
return self.label not in [slipnet.sameness, slipnet.identity]
def symmetricVersion(self):
if not self.slippage():
return self
bond = self.targetDescriptor.getBondCategory(self.initialDescriptor)
if bond == self.label:
return self
return ConceptMapping(
self.targetDescriptionType,
self.initialDescriptionType,
self.targetDescriptor,
            self.initialDescriptor,
self.initialObject,
self.targetObject
)

View File

@ -0,0 +1,54 @@
# Concept Mapping System
## Overview
The concept mapping system is a crucial component of the Copycat architecture that manages the mapping between concepts and their representations in the workspace. This system handles the translation between abstract concepts and concrete instances.
## Key Features
- Concept-to-instance mapping
- Mapping validation
- Relationship tracking
- State management
- Event handling
## Mapping Types
1. **Direct Mappings**
- Letter-to-concept mappings
- Number-to-concept mappings
- Symbol-to-concept mappings
- Pattern-to-concept mappings
2. **Structural Mappings**
- Group-to-concept mappings
- Bond-to-concept mappings
- Hierarchy-to-concept mappings
- Pattern-to-concept mappings
3. **Special Mappings**
- Rule-to-concept mappings
- Context-to-concept mappings
- Meta-concept mappings
- Derived mappings
## Usage
Concept mappings are created and managed through the mapping system:
```python
# Create a concept mapping (constructor signature from conceptMapping.py)
mapping = ConceptMapping(initialDescriptionType, targetDescriptionType,
                         initialDescriptor, targetDescriptor,
                         initialObject, targetObject)
# Query mapping properties
strength = mapping.strength()
ease = mapping.slippability()
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Slipnet: Provides concepts to map
- Workspace: Provides instances to map to
- Codelets: Operate on mappings
- Correspondence: Manages mapping relationships
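A worked example of the `slippability()` and `strength()` formulas in `conceptMapping.py` above, with degree of association 80 and conceptual depth 60 (both on a 0-100 scale):

```python
association, depth = 80.0, 60.0
d = depth / 100.0                           # 0.6
slippability = association * (1 - d * d)   # 80 * 0.64 = 51.2
strength     = association * (1 + d * d)   # 80 * 1.36 = 108.8
```

Deeper concepts therefore make for stronger mappings that resist slipping; when the association is exactly 100, both methods short-circuit and return 100.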

144
copycat/copycat.py Normal file
View File

@ -0,0 +1,144 @@
from .coderack import Coderack
from .randomness import Randomness
from .slipnet import Slipnet
from .temperature import Temperature
from .workspace import Workspace
from .gui import GUI
from pprint import pprint
class Reporter(object):
"""Do-nothing base class for defining new reporter types"""
def report_answer(self, answer):
pass
def report_coderack(self, coderack):
pass
def report_slipnet(self, slipnet):
pass
def report_temperature(self, temperature): #TODO: use entropy
pass
def report_workspace(self, workspace):
pass
class Copycat(object):
def __init__(self, rng_seed=None, reporter=None, gui=False):
self.coderack = Coderack(self)
self.random = Randomness(rng_seed)
self.slipnet = Slipnet()
self.temperature = Temperature() # TODO: use entropy
self.workspace = Workspace(self)
self.reporter = reporter or Reporter()
if gui:
self.gui = GUI('Copycat')
self.lastUpdate = float('-inf')
def step(self):
self.coderack.chooseAndRunCodelet()
self.reporter.report_coderack(self.coderack)
self.reporter.report_temperature(self.temperature)
self.reporter.report_workspace(self.workspace)
def update_workspace(self, currentTime):
self.workspace.updateEverything()
self.coderack.updateCodelets()
self.slipnet.update(self.random)
self.temperature.update(self.workspace.getUpdatedTemperature())
self.lastUpdate = currentTime
self.reporter.report_slipnet(self.slipnet)
def check_reset(self):
if self.gui.app.primary.control.go:
initial, modified, target = self.gui.app.primary.control.get_vars()
self.gui.app.reset_with_strings(initial, modified, target)
self.workspace.resetWithStrings(initial, modified, target)
return True
else:
return False
def mainLoop(self):
currentTime = self.coderack.codeletsRun
self.temperature.tryUnclamp(currentTime)
# Every 5 codelets, we update the workspace.
if currentTime >= self.lastUpdate + 5:
self.update_workspace(currentTime)
self.step()
def runTrial(self):
"""Run a trial of the copycat algorithm"""
self.coderack.reset()
self.slipnet.reset()
self.temperature.reset() # TODO: use entropy
self.workspace.reset()
while self.workspace.finalAnswer is None:
self.mainLoop()
answer = {
'answer': self.workspace.finalAnswer,
'temp': self.temperature.last_unclamped_value, # TODO: use entropy
'time': self.coderack.codeletsRun,
}
self.reporter.report_answer(answer)
return answer
def runGUI(self):
while not self.check_reset():
self.gui.update(self)
self.gui.refresh()
answers = {}
self.temperature.useAdj('pbest')
while True:
if self.check_reset():
answers = {}
self.gui.refresh()
if not self.gui.paused():
answer = self.runTrial()
self.gui.update(self)
d = answers.setdefault(answer['answer'], {
'count': 0,
'sumtemp': 0,
'sumtime': 0
})
d['count'] += 1
d['sumtemp'] += answer['temp']
d['sumtime'] += answer['time']
self.gui.add_answers(answers)
for answer, d in answers.items():
d['avgtemp'] = d.pop('sumtemp') / d['count']
d['avgtime'] = d.pop('sumtime') / d['count']
pprint(answers)
return answers
def run(self, initial, modified, target, iterations):
self.workspace.resetWithStrings(initial, modified, target)
answers = {}
formula = 'pbest'
self.temperature.useAdj(formula)
for i in range(iterations):
answer = self.runTrial()
d = answers.setdefault(answer['answer'], {
'count': 0,
'sumtemp': 0, # TODO: use entropy
'sumtime': 0
})
d['count'] += 1
d['sumtemp'] += answer['temp'] # TODO: use entropy
d['sumtime'] += answer['time']
for answer, d in answers.items():
d['avgtemp'] = d.pop('sumtemp') / d['count']
d['avgtime'] = d.pop('sumtime') / d['count']
print('The formula {} provided:'.format(formula))
print('Average difference: {}'.format(self.temperature.getAverageDifference()))
return answers
def run_forever(self, initial, modified, target):
self.workspace.resetWithStrings(initial, modified, target)
while True:
self.runTrial()

40
copycat/copycat_README.md Normal file
View File

@ -0,0 +1,40 @@
# README_copycat.md
## Overview
`copycat.py` is the core module of the Copycat system, implementing the main analogical reasoning algorithm. It coordinates the interaction between various components like the workspace, slipnet, coderack, and temperature system.
## Core Components
- `Copycat` class: Main class that orchestrates the analogical reasoning process
- `Reporter` class: Base class for defining different types of reporters (GUI, curses, etc.)
## Key Features
- Implements the main Copycat algorithm for analogical reasoning
- Manages the interaction between different system components
- Supports multiple interfaces (GUI, curses, command-line)
- Provides temperature-based control of the reasoning process
- Handles multiple iterations and answer collection
## Main Methods
- `run(initial, modified, target, iterations)`: Run the algorithm for a specified number of iterations
- `runGUI()`: Run the algorithm with graphical interface
- `run_forever(initial, modified, target)`: Run the algorithm continuously
- `runTrial()`: Run a single trial of the algorithm
- `step()`: Execute a single step of the algorithm
- `update_workspace(currentTime)`: Update all workspace components
## Dependencies
- Requires `coderack`, `randomness`, `slipnet`, `temperature`, and `workspace` modules
- Uses `pprint` for pretty printing results
- Optional GUI support through the `gui` module
## Usage
The module is typically used through one of the interface modules:
- `main.py` for command-line interface
- `gui.py` for graphical interface
- `curses_main.py` for terminal-based interface
## Notes
- The system uses a temperature-based control mechanism to guide the reasoning process
- Results include answer statistics, temperature, and time metrics
- The system supports both single-run and continuous operation modes
- The reporter system allows for flexible output handling
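A minimal sketch of plugging in a custom reporter (the `Reporter` base class is exported from the package root):

```python
from copycat import Copycat, Reporter

class PrintReporter(Reporter):
    def report_answer(self, answer):
        print(answer['answer'], answer['time'], answer['temp'])

Copycat(reporter=PrintReporter()).run('abc', 'abd', 'ppqqrr', 5)
```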

204
copycat/correspondence.py Normal file
View File

@ -0,0 +1,204 @@
from .conceptMapping import ConceptMapping
from .group import Group
from .letter import Letter
from .workspaceStructure import WorkspaceStructure
from . import formulas
class Correspondence(WorkspaceStructure):
def __init__(self, ctx, objectFromInitial, objectFromTarget,
conceptMappings, flipTargetObject):
WorkspaceStructure.__init__(self, ctx)
self.objectFromInitial = objectFromInitial
self.objectFromTarget = objectFromTarget
self.conceptMappings = conceptMappings
self.flipTargetObject = flipTargetObject
self.accessoryConceptMappings = []
def __repr__(self):
return '<%s>' % self.__str__()
def __str__(self):
return 'Correspondence between %s and %s' % (
self.objectFromInitial, self.objectFromTarget)
def distinguishingConceptMappings(self):
return [m for m in self.conceptMappings if m.distinguishing()]
def relevantDistinguishingConceptMappings(self):
return [m for m in self.conceptMappings
if m.distinguishing() and m.relevant()]
def extract_target_bond(self):
targetBond = False
if self.objectFromTarget.leftmost:
targetBond = self.objectFromTarget.rightBond
elif self.objectFromTarget.rightmost:
targetBond = self.objectFromTarget.leftBond
return targetBond
def extract_initial_bond(self):
initialBond = False
if self.objectFromInitial.leftmost:
initialBond = self.objectFromInitial.rightBond
elif self.objectFromInitial.rightmost:
initialBond = self.objectFromInitial.leftBond
return initialBond
def getIncompatibleBond(self):
slipnet = self.ctx.slipnet
initialBond = self.extract_initial_bond()
if not initialBond:
return None
targetBond = self.extract_target_bond()
if not targetBond:
return None
if initialBond.directionCategory and targetBond.directionCategory:
mapping = ConceptMapping(
slipnet.directionCategory,
slipnet.directionCategory,
initialBond.directionCategory,
targetBond.directionCategory,
None,
None
)
for m in self.conceptMappings:
if m.incompatible(mapping):
return targetBond
return None
def getIncompatibleCorrespondences(self):
workspace = self.ctx.workspace
return [o.correspondence for o in workspace.initial.objects
if o and self.incompatible(o.correspondence)]
def incompatible(self, other):
if not other:
return False
if self.objectFromInitial == other.objectFromInitial:
return True
if self.objectFromTarget == other.objectFromTarget:
return True
for mapping in self.conceptMappings:
for otherMapping in other.conceptMappings:
if mapping.incompatible(otherMapping):
return True
return False
def supporting(self, other):
if self == other:
return False
if self.objectFromInitial == other.objectFromInitial:
return False
if self.objectFromTarget == other.objectFromTarget:
return False
if self.incompatible(other):
return False
for mapping in self.distinguishingConceptMappings():
for otherMapping in other.distinguishingConceptMappings():
if mapping.supports(otherMapping):
return True
return False
def support(self):
workspace = self.ctx.workspace
if isinstance(self.objectFromInitial, Letter):
if self.objectFromInitial.spansString():
return 100.0
if isinstance(self.objectFromTarget, Letter):
if self.objectFromTarget.spansString():
return 100.0
total = sum(c.totalStrength for c in workspace.correspondences()
if self.supporting(c))
return min(total, 100.0)
def updateInternalStrength(self):
"""A function of how many concept mappings there are
Also considered: their strength and how well they cohere"""
distinguishingMappings = self.relevantDistinguishingConceptMappings()
numberOfConceptMappings = len(distinguishingMappings)
if numberOfConceptMappings < 1:
self.internalStrength = 0.0
return
totalStrength = sum(m.strength() for m in distinguishingMappings)
averageStrength = totalStrength / numberOfConceptMappings
if numberOfConceptMappings == 1.0:
numberOfConceptMappingsFactor = 0.8
elif numberOfConceptMappings == 2.0:
numberOfConceptMappingsFactor = 1.2
else:
numberOfConceptMappingsFactor = 1.6
if self.internallyCoherent():
internalCoherenceFactor = 2.5
else:
internalCoherenceFactor = 1.0
internalStrength = (averageStrength * internalCoherenceFactor *
numberOfConceptMappingsFactor)
self.internalStrength = min(internalStrength, 100.0)
def updateExternalStrength(self):
self.externalStrength = self.support()
def internallyCoherent(self):
"""Whether any pair of distinguishing mappings support each other"""
mappings = self.relevantDistinguishingConceptMappings()
for i in range(len(mappings)):
for j in range(len(mappings)):
if i != j:
if mappings[i].supports(mappings[j]):
return True
return False
def slippages(self):
mappings = [m for m in self.conceptMappings if m.slippage()]
mappings += [m for m in self.accessoryConceptMappings if m.slippage()]
return mappings
def reflexive(self):
initial = self.objectFromInitial
if not initial.correspondence:
return False
if initial.correspondence.objectFromTarget == self.objectFromTarget:
return True
return False
def buildCorrespondence(self):
workspace = self.ctx.workspace
workspace.structures += [self]
if self.objectFromInitial.correspondence:
self.objectFromInitial.correspondence.breakCorrespondence()
if self.objectFromTarget.correspondence:
self.objectFromTarget.correspondence.breakCorrespondence()
self.objectFromInitial.correspondence = self
self.objectFromTarget.correspondence = self
# add mappings to accessory-concept-mapping-list
relevantMappings = self.relevantDistinguishingConceptMappings()
for mapping in relevantMappings:
if mapping.slippage():
self.accessoryConceptMappings += [mapping.symmetricVersion()]
if isinstance(self.objectFromInitial, Group):
if isinstance(self.objectFromTarget, Group):
bondMappings = formulas.getMappings(
self.objectFromInitial,
self.objectFromTarget,
self.objectFromInitial.bondDescriptions,
self.objectFromTarget.bondDescriptions
)
for mapping in bondMappings:
self.accessoryConceptMappings += [mapping]
if mapping.slippage():
self.accessoryConceptMappings += [
mapping.symmetricVersion()]
for mapping in self.conceptMappings:
if mapping.label:
mapping.label.activation = 100.0
def break_the_structure(self):
self.breakCorrespondence()
def breakCorrespondence(self):
workspace = self.ctx.workspace
workspace.structures.remove(self)
self.objectFromInitial.correspondence = None
self.objectFromTarget.correspondence = None

View File

@ -0,0 +1,51 @@
# README_correspondence.md
## Overview
`correspondence.py` implements the Correspondence system, a key component of the Copycat system that manages the mapping relationships between objects in the initial and target strings. It handles the creation, evaluation, and management of correspondences that link objects based on their properties and relationships.
## Core Components
- `Correspondence` class: Main class that represents a mapping between objects
- Concept mapping system
- Correspondence strength evaluation
## Key Features
- Manages mappings between objects in initial and target strings
- Evaluates correspondence strength based on multiple factors
- Handles concept slippages and mappings
- Supports both direct and accessory concept mappings
- Manages correspondence compatibility and support
## Correspondence Components
- `objectFromInitial`: Object from the initial string
- `objectFromTarget`: Object from the target string
- `conceptMappings`: List of concept mappings
- `accessoryConceptMappings`: Additional concept mappings
- `flipTargetObject`: Flag for target object flipping
## Main Methods
- `updateInternalStrength()`: Calculate internal correspondence strength
- `updateExternalStrength()`: Calculate external correspondence strength
- `buildCorrespondence()`: Create and establish correspondence
- `breakCorrespondence()`: Remove correspondence
- `incompatible()`: Check correspondence compatibility
- `supporting()`: Check if correspondence supports another
- `internallyCoherent()`: Check internal coherence
## Concept Mapping Types
- Distinguishing mappings
- Relevant distinguishing mappings
- Bond mappings
- Direction mappings
- Symmetric mappings
## Dependencies
- Requires `conceptMapping`, `group`, `letter`, and `workspaceStructure` modules
- Uses `formulas` for mapping calculations
- Used by the main `copycat` module
## Notes
- Correspondences are evaluated based on concept mapping strength and coherence
- The system supports both direct and indirect concept mappings
- Correspondences can be incompatible with each other
- The system handles both letter and group correspondences
- Concept slippages are tracked and managed
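A worked example of `updateInternalStrength()` in `correspondence.py`: with two relevant distinguishing mappings of strengths 60 and 80 that support each other,

```python
avg = (60.0 + 80.0) / 2                      # average mapping strength: 70.0
factor = 1.2                                 # exactly two concept mappings
coherence = 2.5                              # internally coherent
print(min(avg * factor * coherence, 100.0))  # 210.0, capped to 100.0
```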

436
copycat/curses_reporter.py Normal file
View File

@ -0,0 +1,436 @@
import curses
import time
from .copycat import Reporter
from .bond import Bond
from .correspondence import Correspondence
from .description import Description
from .group import Group
from .letter import Letter
from .rule import Rule
class SafeSubwindow(object):
def __init__(self, window, h, w, y, x):
self.w = window.derwin(h, w, y, x)
def addnstr(self, y, x, s, n):
self.w.addnstr(y, x, s, n)
def addstr(self, y, x, s, attr=curses.A_NORMAL):
try:
self.w.addstr(y, x, s, attr)
except Exception as e:
if str(e) != 'addstr() returned ERR':
raise
def border(self):
self.w.border()
def derwin(self, h, w, y, x):
return self.w.derwin(h, w, y, x)
def erase(self):
self.w.erase()
def getch(self):
self.w.nodelay(True) # make getch() non-blocking
return self.w.getch()
def getmaxyx(self):
return self.w.getmaxyx()
def is_vacant(self, y, x):
ch_plus_attr = self.w.inch(y, x)
if ch_plus_attr == -1:
return True # it's out of bounds
return (ch_plus_attr & 0xFF) == 0x20
def refresh(self):
self.w.refresh()
class CursesReporter(Reporter):
def __init__(self, window, focus_on_slipnet=False, fps_goal=None):
curses.curs_set(0) # hide the cursor
curses.noecho() # hide keypresses
height, width = window.getmaxyx()
if focus_on_slipnet:
upperHeight = 10
else:
upperHeight = 25
answersHeight = 5
coderackHeight = height - upperHeight - answersHeight
self.focusOnSlipnet = focus_on_slipnet
self.fpsGoal = fps_goal
self.temperatureWindow = SafeSubwindow(window, height, 5, 0, 0) # TODO: use entropy (entropyWindow)
self.upperWindow = SafeSubwindow(window, upperHeight, width-5, 0, 5)
self.coderackWindow = SafeSubwindow(window, coderackHeight, width-5, upperHeight, 5)
self.answersWindow = SafeSubwindow(window, answersHeight, width-5, upperHeight + coderackHeight, 5)
self.fpsWindow = SafeSubwindow(self.answersWindow, 3, 9, answersHeight - 3, width - 14)
for w in [self.temperatureWindow, self.upperWindow, self.answersWindow, self.fpsWindow]:
w.erase()
w.border()
w.refresh()
self.answers = {}
self.fpsTicks = 0
self.fpsSince = time.time()
self.fpsMeasured = 100 # just a made-up number at first
self.fpsDelay = 0
def do_keyboard_shortcuts(self):
w = self.temperatureWindow # just a random window
ordch = w.getch()
if ordch in [ord('P'), ord('p')]:
w.addstr(0, 0, 'PAUSE', curses.A_STANDOUT)
w.refresh()
ordch = None
while ordch not in [ord('P'), ord('p'), 27, ord('Q'), ord('q')]:
time.sleep(0.1)
ordch = w.getch()
self.fpsTicks = 0
self.fpsSince = time.time()
w.erase()
w.border()
w.refresh()
if ordch in [27, ord('Q'), ord('q')]:
raise KeyboardInterrupt()
if ordch in [ord('F')]:
self.fpsGoal = (self.fpsGoal or self.fpsMeasured) * 1.25
if ordch in [ord('f')]:
self.fpsGoal = (self.fpsGoal or self.fpsMeasured) * 0.8
def report_answer(self, answer):
d = self.answers.setdefault(answer['answer'], {
'answer': answer['answer'],
'count': 0,
'sumtime': 0,
'sumtemp': 0,
})
d['count'] += 1
d['sumtemp'] += answer['temp']
d['sumtime'] += answer['time']
d['avgtemp'] = d['sumtemp'] / d['count']
d['avgtime'] = d['sumtime'] / d['count']
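# Rank answers by how often they occur, discounted by their average
# temperature (cooler answers rank higher).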
def fitness(d):
return 3 * d['count'] - d['avgtemp']
def represent(d):
return '%s: %d (avg time %.1f, avg temp %.1f)' % (
d['answer'], d['count'], d['avgtime'], d['avgtemp'],
)
answersToPrint = sorted(iter(self.answers.values()), key=fitness, reverse=True)
w = self.answersWindow
pageWidth = w.getmaxyx()[1]
if pageWidth >= 96:
columnWidth = (pageWidth - 6) // 2  # integer division: curses coordinates must be ints
for i, d in enumerate(answersToPrint[:3]):
w.addnstr(i+1, 2, represent(d), columnWidth)
for i, d in enumerate(answersToPrint[3:6]):
w.addnstr(i+1, pageWidth - columnWidth - 2, represent(d), columnWidth)
else:
columnWidth = pageWidth - 4
for i, d in enumerate(answersToPrint[:3]):
w.addnstr(i+1, 2, represent(d), columnWidth)
w.refresh()
def depict_fps(self):
w = self.fpsWindow
now = time.time()
elapsed = now - self.fpsSince
fps = self.fpsTicks / elapsed
if self.fpsGoal is not None:
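# Estimate the real work time per frame (frame time minus our artificial
# delay), then size the next delay so that work plus sleep hits the goal FPS.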
seconds_of_work_per_frame = (elapsed / self.fpsTicks) - self.fpsDelay
desired_time_working_per_second = self.fpsGoal * seconds_of_work_per_frame
if desired_time_working_per_second < 1.0:
self.fpsDelay = (1.0 - desired_time_working_per_second) / fps
else:
self.fpsDelay = 0
w.addstr(1, 1, 'FPS:%3d' % fps, curses.A_NORMAL)
w.refresh()
self.fpsSince = now
self.fpsTicks = 0
self.fpsMeasured = fps
def report_coderack(self, coderack):
self.fpsTicks += 1 # for the purposes of FPS calculation
if self.fpsDelay:
time.sleep(self.fpsDelay)
if time.time() > self.fpsSince + 1.200:
self.depict_fps()
NUMBER_OF_BINS = 7
# Combine duplicate codelets for printing.
counts = {}
for c in coderack.codelets:
assert 1 <= c.urgency <= NUMBER_OF_BINS
key = (c.urgency, c.name)
counts[key] = counts.get(key, 0) + 1
# Sort the most common and highest-urgency codelets to the top.
entries = sorted(
(count, key[0], key[1])
for key, count in counts.items()
)
# Figure out how we'd like to render each codelet's name.
printable_entries = [
(urgency, '%s (%d)' % (name, count))
for count, urgency, name in entries
]
# Render each codelet in the appropriate column,
# as close to the top of the page as physically possible.
w = self.coderackWindow
pageHeight, pageWidth = w.getmaxyx()
columnWidth = (pageWidth - len('important-object-correspondence-scout (n)')) / (NUMBER_OF_BINS - 1)
w.erase()
for u, string in printable_entries:
# Find the highest point on the page where we could place this entry.
start_column = int((u - 1) * columnWidth)
end_column = start_column + len(string)
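# Demand extra vacant columns past the entry itself so neighboring
# entries keep a visible gap.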
for r in range(pageHeight):
if all(w.is_vacant(r, c) for c in range(start_column, end_column+20)):
w.addstr(r, start_column, string)
break
w.refresh()
def slipnode_name_and_attr(self, slipnode):
if slipnode.activation == 100:
return (slipnode.name.upper(), curses.A_STANDOUT)
if slipnode.activation > 50:
return (slipnode.name.upper(), curses.A_BOLD)
else:
return (slipnode.name.lower(), curses.A_NORMAL)
def report_slipnet(self, slipnet):
if not self.focusOnSlipnet:
return
w = self.upperWindow
pageHeight, pageWidth = w.getmaxyx()
w.erase()
w.addstr(1, 2, 'Total: %d slipnodes and %d sliplinks' % (
len(slipnet.slipnodes),
len(slipnet.sliplinks),
))
for c, node in enumerate(slipnet.letters):
s, attr = self.slipnode_name_and_attr(node)
w.addstr(2, 2 * c + 2, s, attr)
for c, node in enumerate(slipnet.numbers):
s, attr = self.slipnode_name_and_attr(node)
w.addstr(3, 2 * c + 2, s, attr)
row = 4
column = 2
for node in slipnet.slipnodes:
if node not in slipnet.letters + slipnet.numbers:
s, attr = self.slipnode_name_and_attr(node)
if column + len(s) > pageWidth - 1:
row += 1
column = 2
w.addstr(row, column, s, attr)
column += len(s) + 1
w.border()
w.refresh()
#TODO: use entropy
def report_temperature(self, temperature):
self.do_keyboard_shortcuts()
w = self.temperatureWindow
height = w.getmaxyx()[0]
max_mercury = height - 4
mercury = max_mercury * temperature.value() / 100.0
for i in range(max_mercury):
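# Map this cell's fill fraction to a glyph:
# ' ' empty, ',' quarter, 'o' half, '%' three-quarters, '8' full.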
ch = ' ,o%8'[int(4 * min(max(0, mercury - i), 1))]
w.addstr(max_mercury - i, 1, 3*ch)
w.addnstr(height - 2, 1, '%3d' % temperature.actual_value, 3)
w.refresh()
def length_of_workspace_object_depiction(self, o, description_structures):
result = len(str(o))
if o.descriptions:
result += 2
result += 2 * (len(o.descriptions) - 1)
for d in o.descriptions:
s, _ = self.slipnode_name_and_attr(d.descriptor)
result += len(s)
if d not in description_structures:
result += 2
result += 1
return result
def depict_workspace_object(self, w, row, column, o, maxImportance, description_structures):
if maxImportance != 0.0 and o.relativeImportance == maxImportance:
attr = curses.A_BOLD
else:
attr = curses.A_NORMAL
w.addstr(row, column, str(o), attr)
column += len(str(o))
if o.descriptions:
w.addstr(row, column, ' (', curses.A_NORMAL)
column += 2
for i, d in enumerate(o.descriptions):
if i != 0:
w.addstr(row, column, ', ', curses.A_NORMAL)
column += 2
s, attr = self.slipnode_name_and_attr(d.descriptor)
if d not in description_structures:
s = '[%s]' % s
w.addstr(row, column, s, attr)
column += len(s)
w.addstr(row, column, ')', curses.A_NORMAL)
column += 1
return column
def depict_bond(self, w, row, column, bond):
slipnet = bond.ctx.slipnet
if bond.directionCategory == slipnet.right:
s = '-- %s -->' % bond.category.name
elif bond.directionCategory == slipnet.left:
s = '<-- %s --' % bond.category.name
elif bond.directionCategory is None:
s = '<-- %s -->' % bond.category.name
if isinstance(bond.leftObject, Group):
s = 'G' + s
if isinstance(bond.rightObject, Group):
s = s + 'G'
w.addstr(row, column, s, curses.A_NORMAL)
return column + len(s)
def depict_grouping_brace(self, w, firstrow, lastrow, column):
if firstrow == lastrow:
w.addstr(firstrow, column, '}', curses.A_NORMAL)
else:
w.addstr(firstrow, column, '\\', curses.A_NORMAL)
w.addstr(lastrow, column, '/', curses.A_NORMAL)
for r in range(firstrow + 1, lastrow):
w.addstr(r, column, '|', curses.A_NORMAL)
def report_workspace(self, workspace):
if self.focusOnSlipnet:
return
slipnet = workspace.ctx.slipnet
w = self.upperWindow
pageHeight, pageWidth = w.getmaxyx()
w.erase()
w.addstr(1, 2, '%d objects (%d letters in %d groups), %d other structures (%d bonds, %d correspondences, %d descriptions, %d rules)' % (
len(workspace.objects),
len([o for o in workspace.objects if isinstance(o, Letter)]),
len([o for o in workspace.objects if isinstance(o, Group)]),
len(workspace.structures) - len([o for o in workspace.objects if isinstance(o, Group)]),
len([o for o in workspace.structures if isinstance(o, Bond)]),
len([o for o in workspace.structures if isinstance(o, Correspondence)]),
len([o for o in workspace.structures if isinstance(o, Description)]),
len([o for o in workspace.structures if isinstance(o, Rule)]),
))
group_objects = {o for o in workspace.objects if isinstance(o, Group)}
letter_objects = {o for o in workspace.objects if isinstance(o, Letter)}
group_and_letter_objects = group_objects | letter_objects
assert set(workspace.objects) == group_and_letter_objects
assert group_objects <= set(workspace.structures)
latent_groups = {o.group for o in workspace.objects if o.group is not None}
assert latent_groups <= group_objects
assert group_objects <= latent_groups
member_groups = {o for g in group_objects for o in g.objectList if isinstance(o, Group)}
assert member_groups <= group_objects
bond_structures = {o for o in workspace.structures if isinstance(o, Bond)}
known_bonds = {o.leftBond for o in group_and_letter_objects if o.leftBond is not None}
known_bonds |= {o.rightBond for o in group_and_letter_objects if o.rightBond is not None}
assert known_bonds == bond_structures
description_structures = {o for o in workspace.structures if isinstance(o, Description)}
latent_descriptions = {d for o in group_and_letter_objects for d in o.descriptions}
assert description_structures <= latent_descriptions
current_rules = set([workspace.rule]) if workspace.rule is not None else set()
correspondences_between_initial_and_target = {o for o in workspace.structures if isinstance(o, Correspondence)}
assert set(workspace.structures) == set.union(
group_objects,
bond_structures,
description_structures,
current_rules,
correspondences_between_initial_and_target,
)
for g in group_objects:
assert g.string in [workspace.initial, workspace.modified, workspace.target]
row = 2
for o in current_rules:
w.addstr(row, 2, str(o), curses.A_BOLD)
for string in [workspace.initial, workspace.modified, workspace.target]:
row += 1
letters_in_string = sorted(
(o for o in letter_objects if o.string == string),
key=lambda o: o.leftIndex,
)
groups_in_string = sorted(
(o for o in group_objects if o.string == string),
key=lambda o: o.leftIndex,
)
if groups_in_string or letters_in_string:
maxImportance = max(o.relativeImportance for o in groups_in_string + letters_in_string)
bonds_in_string = sorted(
(b for b in bond_structures if b.string == string),
key=lambda b: b.leftObject.rightIndex,
)
assert bonds_in_string == sorted(string.bonds, key=lambda b: b.leftObject.rightIndex)
startrow_for_group = {}
endrow_for_group = {}
max_column = 0
for letter in letters_in_string:
for g in groups_in_string:
if g.leftIndex == letter.leftIndex:
startrow_for_group[g] = row
if g.rightIndex == letter.rightIndex:
endrow_for_group[g] = row
column = self.depict_workspace_object(w, row, 2, letter, maxImportance, description_structures)
row += 1
max_column = max(max_column, column)
for b in bonds_in_string:
if b.leftObject.rightIndex == letter.rightIndex:
assert b.rightObject.leftIndex == letter.rightIndex + 1
column = self.depict_bond(w, row, 4, b)
row += 1
max_column = max(max_column, column)
for group in groups_in_string:
start = startrow_for_group[group]
end = endrow_for_group[group]
# Place this group's graphical depiction.
depiction_width = 3 + self.length_of_workspace_object_depiction(group, description_structures)
for firstcolumn in range(max_column, 1000):
lastcolumn = firstcolumn + depiction_width
okay = all(
w.is_vacant(r, c)
for c in range(firstcolumn, lastcolumn + 1)
for r in range(start, end + 1)
)
if okay:
self.depict_grouping_brace(w, start, end, firstcolumn + 1)
self.depict_workspace_object(w, (start + end) // 2, firstcolumn + 3, group, maxImportance, description_structures)
break
row += 1
column = 2
for o in correspondences_between_initial_and_target:
slipnet = workspace.ctx.slipnet
w.addstr(row, column, '%s (%s)' % (str(o), str([m for m in o.conceptMappings if m.label != slipnet.identity])), curses.A_NORMAL)
row += 1
column = 2
w.border()
w.refresh()

View File

@ -0,0 +1,55 @@
# Curses Reporter System
## Overview
The curses reporter system is a visualization component of the Copycat architecture that provides a terminal-based user interface for monitoring and debugging the system's operation. This system uses the curses library to create an interactive display of the system's state.
## Key Features
- Real-time state display
- Interactive monitoring
- Debug information
- System metrics visualization
- User input handling
## Display Components
1. **Main Display**
- Workspace visualization
- Slipnet state
- Coderack status
- Correspondence view
2. **Debug Panels**
- Codelet execution
- Activation levels
- Mapping strengths
- Error messages
3. **Control Interface**
- User commands
- System controls
- Display options
- Navigation
## Usage
The reporter wraps a curses window and is driven through the `Reporter` callbacks (`report_answer`, `report_coderack`, `report_slipnet`, `report_temperature`, `report_workspace`) rather than a single update method:
```python
import curses

def main(window):
    # Constructor signature from curses_reporter.py above; the Copycat
    # run loop is expected to drive the report_* callbacks on this object.
    reporter = CursesReporter(window, focus_on_slipnet=False, fps_goal=30)

curses.wrapper(main)
```
## Dependencies
- Python 3.x
- curses library
- No other external dependencies required
## Related Components
- Workspace: Provides state to display
- Slipnet: Provides activation information
- Coderack: Provides execution status
- Correspondence: Provides mapping information

57
copycat/description.py Normal file
View File

@ -0,0 +1,57 @@
from .workspaceStructure import WorkspaceStructure
class Description(WorkspaceStructure):
def __init__(self, workspaceObject, descriptionType, descriptor):
WorkspaceStructure.__init__(self, workspaceObject.ctx)
self.object = workspaceObject
self.string = workspaceObject.string
self.descriptionType = descriptionType
self.descriptor = descriptor
def __repr__(self):
return '<Description: %s>' % self.__str__()
def __str__(self):
s = 'description(%s) of %s' % (self.descriptor.get_name(), self.object)
workspace = self.ctx.workspace
if self.object.string == getattr(workspace, 'initial', None):
s += ' in initial string'
else:
s += ' in target string'
return s
def updateInternalStrength(self):
self.internalStrength = self.descriptor.conceptualDepth
def updateExternalStrength(self):
self.externalStrength = (self.localSupport() +
self.descriptionType.activation) / 2
def localSupport(self):
workspace = self.ctx.workspace
described_like_self = 0
for other in workspace.objects:
if self.object == other:
continue
if self.object.isWithin(other) or other.isWithin(self.object):
continue
for description in other.descriptions:
if description.descriptionType == self.descriptionType:
described_like_self += 1
results = {0: 0.0, 1: 20.0, 2: 60.0, 3: 90.0}
if described_like_self in results:
return results[described_like_self]
return 100.0
def build(self):
self.descriptionType.buffer = 100.0
self.descriptor.buffer = 100.0
if not self.object.described(self.descriptor):
self.object.descriptions += [self]
def breakDescription(self):
workspace = self.ctx.workspace
if self in workspace.structures:
workspace.structures.remove(self)
self.object.descriptions.remove(self)

View File

@ -0,0 +1,54 @@
# Description System
## Overview
The description system is a core component of the Copycat architecture that manages the representation and generation of descriptions for objects and structures in the workspace. This system provides detailed characterizations of elements in the analogical reasoning process.
## Key Features
- Object description
- Structure characterization
- Property management
- Relationship description
- State representation
## Description Types
1. **Basic Descriptions**
- Object properties
- Structure features
- Relationship details
- State information
2. **Composite Descriptions**
- Group characteristics
- Pattern descriptions
- Hierarchical details
- Context information
3. **Special Descriptions**
- Rule descriptions
- Mapping details
- Meta-descriptions
- Derived characteristics
## Usage
Descriptions are created and managed through the description system; the constructor matches `description.py` above (the object, type, and descriptor come from a running workspace and slipnet):
```python
# Attach a new description to a workspace object.
description = Description(workspace_object, description_type, descriptor)
description.updateInternalStrength()
description.updateExternalStrength()
description.build()   # activates the descriptor and records the description
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Workspace: Contains described objects
- WorkspaceObject: Objects to describe
- Codelets: Use descriptions
- Correspondence: Maps descriptions
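## Example: localSupport
The support table in `description.py` maps the number of other, non-nested objects described the same way to a support score; reimplemented in isolation:
```python
def local_support(described_like_self):
    """Support values from description.py for 0, 1, 2, and 3+ similar descriptions."""
    table = {0: 0.0, 1: 20.0, 2: 60.0, 3: 90.0}
    return table.get(described_like_self, 100.0)

assert local_support(0) == 0.0
assert local_support(2) == 60.0
assert local_support(7) == 100.0
```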

59
copycat/formulas.py Normal file
View File

@ -0,0 +1,59 @@
from .conceptMapping import ConceptMapping
def weightedAverage(values):
total = 0.0
totalWeights = 0.0
for value, weight in values:
total += value * weight
totalWeights += weight
if not totalWeights:
return 0.0
return total / totalWeights
def __localRelevance(string, isRelevant):
numberOfObjectsNotSpanning = 0.0
numberOfMatches = 0.0
for o in string.objects:
if not o.spansString():
numberOfObjectsNotSpanning += 1.0
if isRelevant(o):
numberOfMatches += 1.0
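# With exactly one non-spanning object the denominator below would be zero.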
if numberOfObjectsNotSpanning == 1:
return 100.0 * numberOfMatches
return 100.0 * numberOfMatches / (numberOfObjectsNotSpanning - 1.0)
def localBondCategoryRelevance(string, category):
def isRelevant(o):
return o.rightBond and o.rightBond.category == category
if len(string.objects) == 1:
return 0.0
return __localRelevance(string, isRelevant)
def localDirectionCategoryRelevance(string, direction):
def isRelevant(o):
return o.rightBond and o.rightBond.directionCategory == direction
return __localRelevance(string, isRelevant)
def getMappings(objectFromInitial, objectFromTarget,
initialDescriptions, targetDescriptions):
mappings = []
for initial in initialDescriptions:
for target in targetDescriptions:
if initial.descriptionType == target.descriptionType:
if (initial.descriptor == target.descriptor or
initial.descriptor.slipLinked(target.descriptor)):
mapping = ConceptMapping(
initial.descriptionType,
target.descriptionType,
initial.descriptor,
target.descriptor,
objectFromInitial,
objectFromTarget
)
mappings += [mapping]
return mappings

View File

@ -0,0 +1,54 @@
# Formulas System
## Overview
The formulas system is a utility component of the Copycat architecture that provides mathematical and logical formulas for various calculations and evaluations throughout the system. This system implements core mathematical operations used in the analogical reasoning process.
## Key Features
- Mathematical operations
- Probability calculations
- Distance metrics
- Similarity measures
- Utility functions
## Formula Types
1. **Mathematical Formulas**
- Probability calculations
- Distance metrics
- Similarity scores
- Weight computations
2. **Evaluation Formulas**
- Fitness functions
- Quality measures
- Comparison metrics
- Ranking formulas
3. **Special Formulas**
- Temperature adjustments
- Activation functions
- Threshold calculations
- Normalization methods
## Usage
Formulas are used throughout the system; for example, `weightedAverage` combines (value, weight) pairs and the relevance helpers score bond usage within a string:
```python
from copycat import formulas

# (value, weight) pairs: (60 * 3 + 100 * 1) / 4 == 70.0
strength = formulas.weightedAverage([(60.0, 3.0), (100.0, 1.0)])

# Score how prevalent a bond category is among a string's objects.
relevance = formulas.localBondCategoryRelevance(string, category)
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Temperature: Uses formulas for calculations
- Slipnet: Uses formulas for activation
- Workspace: Uses formulas for evaluation
- Statistics: Uses formulas for analysis
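## Example: Description Matching
`getMappings` pairs descriptions whose types match and whose descriptors are identical or slip-linked; this self-contained sketch restates just that rule, with stub tuples standing in for real descriptions:
```python
from collections import namedtuple

Desc = namedtuple('Desc', 'descriptionType descriptor')

def matches(initial, target, slip_linked):
    # Same test as formulas.getMappings, minus the ConceptMapping construction.
    return (initial.descriptionType == target.descriptionType and
            (initial.descriptor == target.descriptor or
             slip_linked(initial.descriptor, target.descriptor)))

a = Desc('letterCategory', 'a')
b = Desc('letterCategory', 'b')
print(matches(a, b, lambda x, y: True))   # slip-linked descriptors still map
```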

237
copycat/group.py Normal file
View File

@ -0,0 +1,237 @@
from .description import Description
from .workspaceObject import WorkspaceObject
from . import formulas
class Group(WorkspaceObject):
# pylint: disable=too-many-instance-attributes
def __init__(self, string, groupCategory, directionCategory, facet,
objectList, bondList):
# pylint: disable=too-many-arguments
WorkspaceObject.__init__(self, string)
slipnet = self.ctx.slipnet
self.groupCategory = groupCategory
self.directionCategory = directionCategory
self.facet = facet
self.objectList = objectList
self.bondList = bondList
self.bondCategory = self.groupCategory.getRelatedNode(
slipnet.bondCategory)
leftObject = objectList[0]
rightObject = objectList[-1]
self.leftIndex = leftObject.leftIndex
self.leftmost = self.leftIndex == 1
self.rightIndex = rightObject.rightIndex
self.rightmost = self.rightIndex == len(self.string)
self.descriptions = []
self.bondDescriptions = []
self.name = ''
if self.bondList:
firstFacet = self.bondList[0].facet
self.addBondDescription(
Description(self, slipnet.bondFacet, firstFacet))
self.addBondDescription(
Description(self, slipnet.bondCategory, self.bondCategory))
self.addDescription(slipnet.objectCategory, slipnet.group)
self.addDescription(slipnet.groupCategory, self.groupCategory)
if not self.directionCategory:
# sameness group - find letterCategory
letter = self.objectList[0].getDescriptor(self.facet)
self.addDescription(self.facet, letter)
if self.directionCategory:
self.addDescription(slipnet.directionCategory, self.directionCategory)
if self.spansString():
self.addDescription(slipnet.stringPositionCategory, slipnet.whole)
elif self.leftmost:
self.addDescription(slipnet.stringPositionCategory, slipnet.leftmost)
elif self.rightmost:
self.addDescription(slipnet.stringPositionCategory, slipnet.rightmost)
elif self.middleObject():
self.addDescription(slipnet.stringPositionCategory, slipnet.middle)
self.add_length_description_category()
def add_length_description_category(self):
# check whether or not to add length description category
random = self.ctx.random
slipnet = self.ctx.slipnet
probability = self.lengthDescriptionProbability()
if random.coinFlip(probability):
length = len(self.objectList)
if length < 6:
self.addDescription(slipnet.length, slipnet.numbers[length - 1])
def __str__(self):
s = self.string.__str__()
left = self.leftIndex - 1
right = self.rightIndex
return 'group[%d:%d] == %s' % (left, right - 1, s[left:right])
def getIncompatibleGroups(self):
result = []
for objekt in self.objectList:
while objekt.group:
result += [objekt.group]
objekt = objekt.group
return result
def addBondDescription(self, description):
self.bondDescriptions += [description]
def singleLetterGroupProbability(self):
slipnet = self.ctx.slipnet
temperature = self.ctx.temperature
numberOfSupporters = self.numberOfLocalSupportingGroups()
if not numberOfSupporters:
return 0.0
if numberOfSupporters == 1:
exp = 4.0
elif numberOfSupporters == 2:
exp = 2.0
else:
exp = 1.0
support = self.localSupport() / 100.0
activation = slipnet.length.activation / 100.0
supportedActivation = (support * activation) ** exp
#TODO: use entropy
return temperature.getAdjustedProbability(supportedActivation)
def flippedVersion(self):
slipnet = self.ctx.slipnet
flippedBonds = [b.flippedversion() for b in self.bondList]
flippedGroup = self.groupCategory.getRelatedNode(slipnet.flipped)
flippedDirection = self.directionCategory.getRelatedNode(
slipnet.flipped)
return Group(self.string, flippedGroup, flippedDirection,
self.facet, self.objectList, flippedBonds)
def buildGroup(self):
workspace = self.ctx.workspace
workspace.objects += [self]
workspace.structures += [self]
self.string.objects += [self]
for objekt in self.objectList:
objekt.group = self
workspace.buildDescriptions(self)
self.activateDescriptions()
def activateDescriptions(self):
for description in self.descriptions:
description.descriptor.buffer = 100.0
def lengthDescriptionProbability(self):
slipnet = self.ctx.slipnet
temperature = self.ctx.temperature
length = len(self.objectList)
if length > 5:
return 0.0
cubedlength = length ** 3
fred = cubedlength * (100.0 - slipnet.length.activation) / 100.0
probability = 0.5 ** fred
#TODO: use entropy
value = temperature.getAdjustedProbability(probability)
if value < 0.06:
value = 0.0
return value
def break_the_structure(self):
self.breakGroup()
def breakGroup(self):
workspace = self.ctx.workspace
if self.correspondence:
self.correspondence.breakCorrespondence()
if self.group:
self.group.breakGroup()
if self.leftBond:
self.leftBond.breakBond()
if self.rightBond:
self.rightBond.breakBond()
while len(self.descriptions):
description = self.descriptions[-1]
description.breakDescription()
for o in self.objectList:
o.group = None
if self in workspace.structures:
workspace.structures.remove(self)
if self in workspace.objects:
workspace.objects.remove(self)
if self in self.string.objects:
self.string.objects.remove(self)
def updateInternalStrength(self):
slipnet = self.ctx.slipnet
relatedBondAssociation = self.groupCategory.getRelatedNode(
slipnet.bondCategory).degreeOfAssociation()
bondWeight = relatedBondAssociation ** 0.98
length = len(self.objectList)
if length == 1:
lengthFactor = 5.0
elif length == 2:
lengthFactor = 20.0
elif length == 3:
lengthFactor = 60.0
else:
lengthFactor = 90.0
lengthWeight = 100.0 - bondWeight
weightList = ((relatedBondAssociation, bondWeight),
(lengthFactor, lengthWeight))
self.internalStrength = formulas.weightedAverage(weightList)
def updateExternalStrength(self):
if self.spansString():
self.externalStrength = 100.0
else:
self.externalStrength = self.localSupport()
def localSupport(self):
numberOfSupporters = self.numberOfLocalSupportingGroups()
if numberOfSupporters == 0:
return 0.0
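# 0.6 ** (1 / n**3) climbs steeply toward 1.0 as the supporter count n grows.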
supportFactor = min(1.0, 0.6 ** (1 / (numberOfSupporters ** 3)))
densityFactor = 100.0 * ((self.localDensity() / 100.0) ** 0.5)
return densityFactor * supportFactor
def numberOfLocalSupportingGroups(self):
count = 0
for objekt in self.string.objects:
if isinstance(objekt, Group) and self.isOutsideOf(objekt):
if (objekt.groupCategory == self.groupCategory and
objekt.directionCategory == self.directionCategory):
count += 1
return count
def localDensity(self):
numberOfSupporters = self.numberOfLocalSupportingGroups()
halfLength = len(self.string) / 2.0
return 100.0 * numberOfSupporters / halfLength
def sameGroup(self, other):
if self.leftIndex != other.leftIndex:
return False
if self.rightIndex != other.rightIndex:
return False
if self.groupCategory != other.groupCategory:
return False
if self.directionCategory != other.directionCategory:
return False
if self.facet != other.facet:
return False
return True
def distinguishingDescriptor(self, descriptor):
"""Whether no other object of the same type has the same descriptor"""
if not WorkspaceObject.distinguishingDescriptor(self, descriptor):
return False
for objekt in self.string.objects:
# check to see if they are of the same type
if isinstance(objekt, Group) and objekt != self:
# check all descriptions for the descriptor
for description in objekt.descriptions:
if description.descriptor == descriptor:
return False
return True

53
copycat/group_README.md Normal file
View File

@ -0,0 +1,53 @@
# group_README.md
## Overview
`group.py` implements the Group system, a key component of the Copycat system that manages the grouping of objects in strings. It handles the creation, evaluation, and management of groups that represent meaningful collections of objects based on their properties and relationships.
## Core Components
- `Group` class: Main class that represents a group of objects
- Group evaluation system
- Group description management
## Key Features
- Manages groups of objects in strings
- Evaluates group strength based on multiple factors
- Handles group descriptions and bond descriptions
- Supports group flipping and versioning
- Manages group compatibility and support
## Group Components
- `groupCategory`: Category of the group
- `directionCategory`: Direction of the group
- `facet`: Aspect of the group
- `objectList`: List of objects in the group
- `bondList`: List of bonds in the group
- `descriptions`: List of group descriptions
- `bondDescriptions`: List of bond descriptions
## Main Methods
- `updateInternalStrength()`: Calculate internal group strength
- `updateExternalStrength()`: Calculate external group strength
- `buildGroup()`: Create and establish group
- `breakGroup()`: Remove group
- `localSupport()`: Calculate local support
- `numberOfLocalSupportingGroups()`: Count supporting groups
- `sameGroup()`: Compare groups for equality
## Group Types
- Single letter groups
- Multi-letter groups
- Direction-based groups
- Category-based groups
- Length-based groups
## Dependencies
- Requires `description`, `workspaceObject`, and `formulas` modules
- Used by the main `copycat` module
## Notes
- Groups are evaluated based on bond association and length
- The system supports both single and multi-object groups
- Groups can have multiple descriptions and bond descriptions
- The system handles group compatibility and support
- Groups can be flipped to create alternative versions
- Length descriptions are probabilistically added based on temperature
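## Usage
A minimal sketch of the group lifecycle, following the constructor in `group.py` above; the slipnet node names and member objects are assumptions standing in for a running slipnet and workspace:
```python
# Hypothetical: group two letters already joined by a left-to-right bond.
group = Group(string, slipnet.successorGroup, slipnet.right,
              slipnet.letterCategory, [letter_a, letter_b], [bond_ab])
group.updateInternalStrength()
group.updateExternalStrength()
group.buildGroup()    # registers the group, its members, and its descriptions
group.breakGroup()    # tears it all down again
```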

1
copycat/gui/__init__.py Normal file
View File

@ -0,0 +1 @@
from .gui import GUI

59
copycat/gui/control.py Normal file
View File

@ -0,0 +1,59 @@
import tkinter as tk
import tkinter.ttk as ttk
from .gridframe import GridFrame
from .entry import Entry
class Control(GridFrame):
def __init__(self, parent, *args, **kwargs):
GridFrame.__init__(self, parent, *args, **kwargs)
self.paused = True
self.steps = 0
self.go = False
self.playbutton = ttk.Button(self, text='Play', command=lambda : self.toggle())
self.add(self.playbutton, 0, 0)
self.stepbutton = ttk.Button(self, text='Step', command=lambda : self.step())
self.add(self.stepbutton, 1, 0)
self.entry = Entry(self)
self.add(self.entry, 0, 1, xspan=2)
self.gobutton = ttk.Button(self, text='Go', command=lambda : self.set_go())
self.add(self.gobutton, 0, 2, xspan=2)
def play(self):
self.paused = False
self.playbutton['text'] = 'Pause'
def pause(self):
self.paused = True
self.playbutton['text'] = 'Play'
def toggle(self):
if self.paused:
self.play()
else:
self.pause()
def step(self):
self.steps += 1
def has_step(self):
if self.steps > 0:
self.steps -= 1
return True
else:
return False
def set_go(self):
self.go = True
self.play()
def get_vars(self):
return self.entry.a.get(), self.entry.b.get(), self.entry.c.get()
def reset(self):
self.go = False

27
copycat/gui/entry.py Normal file
View File

@ -0,0 +1,27 @@
import tkinter as tk
import tkinter.ttk as ttk
from .gridframe import GridFrame
class Entry(GridFrame):
def __init__(self, parent, *args, **kwargs):
GridFrame.__init__(self, parent, *args, **kwargs)
self.aLabel = ttk.Label(self, text='Initial:')
self.a = ttk.Entry(self, style='EntryStyle.TEntry')
self.add(self.aLabel, 0, 0)
self.add(self.a, 0, 1)
self.bLabel = ttk.Label(self, text='Final:')
self.b = ttk.Entry(self, style='EntryStyle.TEntry')
self.add(self.bLabel, 1, 0)
self.add(self.b, 1, 1)
self.cLabel = ttk.Label(self, text='Next:')
self.c = ttk.Entry(self, style='EntryStyle.TEntry')
self.add(self.cLabel, 2, 0)
self.add(self.c, 2, 1)
GridFrame.configure(self)

11
copycat/gui/gridframe.py Normal file
View File

@ -0,0 +1,11 @@
import tkinter as tk
import tkinter.ttk as ttk
class GridFrame(tk.Frame):
def __init__(self, parent, *args, **kwargs):
tk.Frame.__init__(self, parent, *args, **kwargs)
def add(self, element, x, y, xspan=1, yspan=1):
# x is the grid column, y the grid row.
element.grid(column=x, row=y, columnspan=xspan, rowspan=yspan, sticky=tk.N+tk.E+tk.S+tk.W)
tk.Grid.columnconfigure(self, x, weight=1)
tk.Grid.rowconfigure(self, y, weight=1)

96
copycat/gui/gui.py Normal file
View File

@ -0,0 +1,96 @@
import sys
import time
import tkinter as tk
import tkinter.ttk as ttk
from tkinter import scrolledtext
from tkinter import filedialog
import matplotlib.pyplot as plt
from .status import Status, StatusFrame
from .status import Plot
from .gridframe import GridFrame
from .primary import Primary
from .list import List
from .style import configure_style
from .plot import plot_answers, plot_temp
plt.style.use('dark_background')
class MainApplication(GridFrame):
def __init__(self, parent, *args, **kwargs):
GridFrame.__init__(self, parent, *args, **kwargs)
self.parent = parent
self.primary = Primary(self, *args, **kwargs)
self.add(self.primary, 0, 0, xspan=2)
self.create_widgets()
GridFrame.configure(self)
def create_widgets(self):
columns = 20
self.slipList = List(self, columns)
self.add(self.slipList, 0, 1)
self.codeletList = List(self, columns)
self.add(self.codeletList, 1, 1)
self.objectList = List(self, columns)
self.add(self.objectList, 2, 1, xspan=2)
self.graph1 = Plot(self, 'Temperature history')
self.add(self.graph1, 2, 0)
self.graph2 = Plot(self, 'Answer Distribution')
self.add(self.graph2, 3, 0)
def update(self, copycat):
self.primary.update(copycat)
slipnodes = copycat.slipnet.slipnodes
codelets = copycat.coderack.codelets
objects = copycat.workspace.objects
self.slipList.update(slipnodes, key=lambda s: s.activation,
formatter=lambda s: '{}: {}'.format(s.name, round(s.activation, 2)))
self.codeletList.update(codelets, key=lambda c: c.urgency,
formatter=lambda c: '{}: {}'.format(c.name, round(c.urgency, 2)))
def get_descriptors(o):
return ', '.join('({}={})'.format(d.descriptionType.name, d.descriptor.name) for d in o.descriptions)
self.objectList.update(objects, formatter=lambda o: '{}: {}'.format(o, get_descriptors(o)))
def modifier(status):
with plt.style.context(('dark_background')):
plot_temp(copycat.temperature, status)
self.graph1.status.modifier = modifier
def reset_with_strings(self, initial, modified, target):
self.primary.reset_with_strings(initial, modified, target)
class GUI(object):
def __init__(self, title):
self.root = tk.Tk()
self.root.title(title)
tk.Grid.rowconfigure(self.root, 0, weight=1)
tk.Grid.columnconfigure(self.root, 0, weight=1)
self.app = MainApplication(self.root)
self.app.grid(row=0, column=0, sticky=tk.N+tk.S+tk.E+tk.W)
configure_style(ttk.Style())
def add_answers(self, answers):
def modifier(status):
with plt.style.context(('dark_background')):
plot_answers(answers, status)
self.app.graph2.status.modifier = modifier
def refresh(self):
self.root.update_idletasks()
self.root.update()
def paused(self):
return self.app.primary.control.paused
def update(self, copycat):
self.app.update(copycat)

26
copycat/gui/list.py Normal file
View File

@ -0,0 +1,26 @@
import tkinter as tk
import tkinter.ttk as ttk
import time
from .gridframe import GridFrame
class List(GridFrame):
def __init__(self, parent, columns, updateInterval=.1):
GridFrame.__init__(self, parent)
self.text = ttk.Label(self, anchor='w', justify=tk.LEFT, width=30)
self.add(self.text, 0, 0)
self.columns = columns
self.lastUpdated = time.time()
self.updateInterval = updateInterval
def update(self, l, key=None, reverse=False, formatter=lambda s: str(s)):
current = time.time()
if current - self.lastUpdated > self.updateInterval:
if key is not None:
# Sort the full list first so the slice keeps the best-ranked entries.
l = sorted(l, key=key, reverse=reverse)
l = l[:self.columns]
self.text['text'] = '\n'.join(map(formatter, l))
self.lastUpdated = current

24
copycat/gui/plot.py Normal file
View File

@ -0,0 +1,24 @@
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
def plot_temp(temperature, status):
status.subplot.clear()
status.subplot.plot(temperature.history)
status.subplot.set_ylabel('Temperature')
status.subplot.set_xlabel('Time')
status.subplot.set_title('Temperature History')
def plot_answers(answers, status):
answers = sorted(answers.items(), key=lambda kv : kv[1]['count'])
objects = [t[0] for t in answers]
yvalues = [t[1]['count'] for t in answers]
y_pos = np.arange(len(objects))
status.subplot.clear()
status.subplot.bar(y_pos, yvalues, align='center', alpha=0.5)
status.subplot.set_xticks(y_pos)
status.subplot.set_xticklabels(tuple(objects))
status.subplot.set_ylabel('Count')
status.subplot.set_title('Answers')

30
copycat/gui/primary.py Normal file
View File

@ -0,0 +1,30 @@
import tkinter as tk
import tkinter.ttk as ttk
from tkinter import scrolledtext
from tkinter import filedialog
from .control import Control
from .gridframe import GridFrame
from .workspacecanvas import WorkspaceCanvas
class Primary(GridFrame):
def __init__(self, parent, *args, **kwargs):
GridFrame.__init__(self, parent, *args, **kwargs)
self.canvas = WorkspaceCanvas(self)
self.add(self.canvas, 0, 0, xspan=2)
self.control = Control(self)
self.add(self.control, 0, 2)
GridFrame.configure(self)
def update(self, copycat):
self.canvas.update(copycat)
def reset_with_strings(self, initial, modified, target):
self.canvas.reset_with_strings(initial, modified, target)
self.control.reset()

66
copycat/gui/status.py Normal file
View File

@ -0,0 +1,66 @@
import matplotlib
matplotlib.use("TkAgg")
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk
from matplotlib.figure import Figure
import tkinter as tk
import tkinter.ttk as ttk
import time
import matplotlib.animation as animation
import matplotlib.pyplot as plt
plt.style.use('dark_background')
from .gridframe import GridFrame
class Plot(GridFrame):
def __init__(self, parent, title):
GridFrame.__init__(self, parent)
self.status = Status()
self.sframe = StatusFrame(self, self.status, title)
self.add(self.sframe, 0, 0, xspan=2)
self.savebutton = ttk.Button(self, text='Save to path:', command=lambda : self.save())
self.add(self.savebutton, 0, 1)
self.pathentry = ttk.Entry(self, style='EntryStyle.TEntry')
self.pathentry.insert(0, 'output/dist.png')   # default save path
self.add(self.pathentry, 1, 1)
def save(self):
path = self.pathentry.get()
if len(path) > 0:
try:
self.status.figure.savefig(path)
except Exception as e:
print(e)
class StatusFrame(ttk.Frame):
def __init__(self, parent, status, title):
ttk.Frame.__init__(self, parent)
self.status = status
self.canvas = FigureCanvasTkAgg(status.figure, self)
self.canvas.draw()  # FigureCanvasTkAgg.show() is gone in newer matplotlib
self.canvas.get_tk_widget().pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)
self.animation = animation.FuncAnimation(status.figure, lambda i : status.update_plots(i), interval=1000)
class Status(object):
def __init__(self):
self.figure = Figure(figsize=(5,5), dpi=100)
self.subplot = self.figure.add_subplot(111)
self.x = []
self.y = []
def modifier(status):
with plt.style.context(('dark_background')):
status.subplot.plot(status.x, status.y)
self.modifier = modifier
self.update_plots(0)
def update_plots(self, i):
self.subplot.clear()
self.modifier(self)

33
copycat/gui/style.py Normal file
View File

@ -0,0 +1,33 @@
style_dict = dict(foreground='white',
background='black')
map_options = dict(
foreground=[('disabled', 'black'),
('pressed', 'white'),
('active', 'white')],
background=[('disabled', 'black'),
('pressed', '!focus', 'black'),
('active', 'black')],
highlightcolor=[('focus', 'black'),
('!focus', 'black')])
def configure_style(style):
style.configure('TButton', **style_dict)
style.map('TButton', **map_options)
style.configure('TLabel', **style_dict)
#style.configure('TEntry', **style_dict)
#style.map('TEntry', **map_options)
# A hack to change entry style
style.element_create("plain.field", "from", "clam")
style.layout("EntryStyle.TEntry",
[('Entry.plain.field', {'children': [(
'Entry.background', {'children': [(
'Entry.padding', {'children': [(
'Entry.textarea', {'sticky': 'nswe'})],
'sticky': 'nswe'})], 'sticky': 'nswe'})],
'border':'2', 'sticky': 'nswe'})])
style.configure("EntryStyle.TEntry",
background="black",
foreground="white",
fieldbackground="black")

View File

@ -0,0 +1,70 @@
import tkinter as tk
import tkinter.ttk as ttk
from .gridframe import GridFrame
font1Size = 32
font1 = ('Helvetica', font1Size)
class WorkspaceCanvas(GridFrame):
def __init__(self, parent, *args, **kwargs):
GridFrame.__init__(self, parent, *args, **kwargs)
self.chars = []
self.initial = ''
self.modified = ''
self.target = ''
self.answer = ''
self.changed = False
self.canvas = tk.Canvas(self, background='black')
#self.canvas['width'] = 1600
self.add(self.canvas, 0, 0)
GridFrame.configure(self)
def update(self, copycat):
answer = '' if copycat.workspace.rule is None else copycat.workspace.rule.buildTranslatedRule()
if answer != self.answer:
self.answer = answer
self.changed = True
if self.changed:
self.canvas.delete('all')
del self.chars[:]
self.add_text()
self.changed = False
def add_text(self):
padding = 100
def add_sequences(sequences, x, y):
for sequence in sequences:
x += padding
if sequence is None:
sequence = ''
for char in sequence:
self.chars.append((char, (x, y)))
self.canvas.create_text(x, y, text=char, anchor=tk.NW, font=font1, fill='white')
x += font1Size
return x, y
x = 0
y = padding
add_sequences([self.initial, self.modified], x, y)
x = 0
y += padding
add_sequences([self.target, self.answer], x, y)
def reset_with_strings(self, initial, modified, target):
if initial != self.initial or \
modified != self.modified or \
target != self.target:
self.changed = True
self.initial = initial
self.modified = modified
self.target = target

9
copycat/io.py Normal file
View File

@ -0,0 +1,9 @@
def save_answers(answers, filename):
answers = sorted(answers.items(), key=lambda kv : kv[1]['count'])
keys = [k for k, v in answers]
counts = [str(v['count']) for k, v in answers]
with open(filename, 'w') as outfile:
outfile.write(','.join(keys))
outfile.write('\n')
outfile.write(','.join(counts))

54
copycat/io_README.md Normal file
View File

@ -0,0 +1,54 @@
# Input/Output System
## Overview
The input/output system is a utility component of the Copycat architecture that handles file and data input/output operations. This system provides interfaces for reading and writing data to and from various sources.
## Key Features
- File operations
- Data parsing
- Format conversion
- Error handling
- State management
## Operation Types
1. **File Operations**
- File reading
- File writing
- File management
- Path handling
2. **Data Operations**
- Data parsing
- Format conversion
- Data validation
- Error handling
3. **Special Operations**
- Configuration loading
- Logging
- Debug output
- State persistence
## Usage
I/O operations are performed through the I/O system; `io.py` currently exposes a single helper that writes an answer distribution as two CSV rows (answers, then counts):
```python
from copycat.io import save_answers

# `answers` maps each answer string to a dict with at least a 'count' key,
# matching the structure the reporters build up.
save_answers(answers, 'output/answers.csv')
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Workspace: Uses I/O for data loading
- Statistics: Uses I/O for logging
- Curses Reporter: Uses I/O for display
- Problem: Uses I/O for problem loading

49
copycat/letter.py Normal file
View File

@ -0,0 +1,49 @@
from .workspaceObject import WorkspaceObject
class Letter(WorkspaceObject):
def __init__(self, string, position, length):
WorkspaceObject.__init__(self, string)
workspace = self.ctx.workspace
workspace.objects += [self]
string.objects += [self]
self.leftIndex = position
self.leftmost = self.leftIndex == 1
self.rightIndex = position
self.rightmost = self.rightIndex == length
def describe(self, position, length):
slipnet = self.ctx.slipnet
if length == 1:
self.addDescription(slipnet.stringPositionCategory, slipnet.single)
if self.leftmost:
self.addDescription(slipnet.stringPositionCategory, slipnet.leftmost)
if self.rightmost:
self.addDescription(slipnet.stringPositionCategory, slipnet.rightmost)
if position * 2 == length + 1:
self.addDescription(slipnet.stringPositionCategory, slipnet.middle)
def __repr__(self):
return '<Letter: %s>' % self.__str__()
def __str__(self):
if not self.string:
return ''
i = self.leftIndex - 1
if len(self.string) <= i:
raise ValueError('len(self.string) <= self.leftIndex :: %d <= %d' %
(len(self.string), self.leftIndex))
return self.string[i]
def distinguishingDescriptor(self, descriptor):
"""Whether no other object of the same type has the same descriptor"""
if not WorkspaceObject.distinguishingDescriptor(self, descriptor):
return False
for objekt in self.string.objects:
# check to see if they are of the same type
if isinstance(objekt, Letter) and objekt != self:
# check all descriptions for the descriptor
for description in objekt.descriptions:
if description.descriptor == descriptor:
return False
return True

54
copycat/letter_README.md Normal file
View File

@ -0,0 +1,54 @@
# Letter System
## Overview
The letter system is a specialized component of the Copycat architecture that handles the representation and manipulation of individual letters and characters in the workspace. This system manages the properties and behaviors of letter objects in the analogical reasoning process.
## Key Features
- Letter representation
- Character properties
- Letter manipulation
- Pattern matching
- State management
## Letter Descriptions
Letters are described by where they sit in their string (see `describe()` in `letter.py`):
1. **Position descriptions**
- `single`: the letter is the whole string
- `leftmost` / `rightmost`: the letter sits at an end of the string
- `middle`: the letter sits exactly at the center
2. **Distinguishing descriptors**
- `distinguishingDescriptor()` reports whether no other letter in the string carries the same descriptor
## Usage
Letter operations follow the constructor in `letter.py` above (`string` is a workspace string object, positions are 1-based):
```python
# Create the first letter of a 3-character string and describe its position.
letter = Letter(string, 1, 3)
letter.describe(1, 3)     # adds the leftmost description
print(str(letter))        # prints the underlying character
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Workspace: Contains letter objects
- WorkspaceObject: Base class for letters
- Codelets: Operate on letters
- Correspondence: Maps between letters

20
copycat/plot.py Normal file
View File

@ -0,0 +1,20 @@
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
def plot_answers(answers, show=True, save=True, filename='distribution.png'):
answers = sorted(answers.items(), key=lambda kv : kv[1]['count'])
objects = [t[0] + ' (temp:{})'.format(round(t[1]['avgtemp'], 2)) for t in answers]
yvalues = [t[1]['count'] for t in answers]
y_pos = np.arange(len(objects))
plt.bar(y_pos, yvalues, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('Count')
plt.title('Answers')
if show:
plt.show()
if save:
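# Assumes an output/ directory exists in the working directory.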
plt.savefig('output/{}'.format(filename))

56
copycat/plot_README.md Normal file
View File

@ -0,0 +1,56 @@
# Plot System
## Overview
The plot system is a visualization component of the Copycat architecture that provides plotting and graphing capabilities for analyzing and displaying system data. This system helps in visualizing various metrics and relationships in the analogical reasoning process.
## Key Features
- Data visualization
- Graph generation
- Metric plotting
- Analysis display
- State management
## Plot Types
1. **Performance Plots**
- Activation curves
- Execution timelines
- Success rates
- Error distributions
2. **System Plots**
- Object counts
- Link distributions
- State changes
- Memory usage
3. **Analysis Plots**
- Pattern frequencies
- Relationship maps
- Similarity matrices
- Correlation graphs
## Usage
Plots are generated through the plot system; `plot.py` exposes a single matplotlib-based helper:
```python
from copycat.plot import plot_answers

# `answers` maps answer strings to dicts with 'count' and 'avgtemp' keys.
plot_answers(answers, show=False, save=True, filename='distribution.png')
```
## Dependencies
- Python 3.x
- matplotlib library
- No other external dependencies required
## Related Components
- Statistics: Provides data for plotting
- Curses Reporter: Uses plots for display
- Workspace: Provides object data
- Slipnet: Provides activation data

69
copycat/problem.py Normal file
View File

@ -0,0 +1,69 @@
from .copycat import Copycat
from pprint import pprint
class Problem:
def __init__(self, initial, modified, target, iterations, distributions=None, formulas=None):
self.formulas = formulas
if formulas is not None:
assert hasattr(Copycat(), 'temperature')
elif hasattr(Copycat(), 'temperature'):
self.formulas = set(Copycat().temperature.adj_formulas())
print(self.formulas)
self.initial = initial
self.modified = modified
self.target = target
self.iterations = iterations
if distributions is None:
self.distributions = self.solve()
else:
self.distributions = distributions
print(self.formulas)
def test(self, comparison, expected=None):
print('-' * 120)
print('Testing copycat problem: {} : {} :: {} : _'.format(self.initial,
self.modified,
self.target))
print('expected:')
if expected is None:
expected = self.distributions
pprint(expected)
actual = self.solve()
print('actual:')
pprint(actual)
comparison(actual, expected)
print('-' * 120)
def solve(self):
print('-' * 120)
print('Testing copycat problem: {} : {} :: {} : _'.format(self.initial,
self.modified,
self.target))
copycat = Copycat()
answers = dict()
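# One answer distribution per temperature-adjustment formula.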
if self.formulas is None:
if hasattr(copycat, 'temperature'):
formula = copycat.temperature.getAdj()
else:
formula = None
answers[formula] = copycat.run(self.initial,
self.modified,
self.target,
self.iterations)
else:
print(self.formulas)
for formula in self.formulas:
copycat.temperature.useAdj(formula)
answers[formula] = copycat.run(self.initial,
self.modified,
self.target,
self.iterations)
print('Done with {}'.format(formula))
return answers
def generate(self):
self.distributions = self.solve()

54
copycat/problem_README.md Normal file
View File

@ -0,0 +1,54 @@
# Problem System
## Overview
The problem system is a core component of the Copycat architecture that manages the representation and handling of analogical reasoning problems. This system defines the structure and properties of problems that the system attempts to solve.
## Key Features
- Problem representation
- Problem loading
- Problem validation
- Solution tracking
- State management
## Problem Types
1. **Basic Problems**
- String problems
- Pattern problems
- Rule problems
- Mapping problems
2. **Composite Problems**
- Multi-step problems
- Hierarchical problems
- Network problems
- Context problems
3. **Special Problems**
- Test problems
- Debug problems
- Meta-problems
- Derived problems
## Usage
Problems are created and managed through the problem system, matching the constructor in `problem.py` above:
```python
# Run 'abc : abd :: ijk : ?' for 10 iterations; the answer
# distribution is computed (and printed) on construction.
problem = Problem('abc', 'abd', 'ijk', 10)

# Re-solve and compare against the stored distribution.
problem.test(comparison=lambda actual, expected: print(actual == expected))
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Workspace: Contains problem state
- Slipnet: Provides concepts for solving
- Codelets: Operate on problems
- Correspondence: Maps problem elements

43
copycat/randomness.py Normal file
View File

@ -0,0 +1,43 @@
import bisect
import math
import random
def accumulate(iterable):
total = 0
for v in iterable:
total += v
yield total
class Randomness(object):
def __init__(self, seed=None):
self.rng = random.Random(seed)
def coinFlip(self, p=0.5):
return self.rng.random() < p
def choice(self, seq):
return self.rng.choice(seq)
def weighted_choice(self, seq, weights):
if not seq:
# Many callers rely on this behavior.
return None
else:
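# Roulette-wheel selection: draw a point in [0, total) and locate its
# slot by binary search over the cumulative weights.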
cum_weights = list(accumulate(weights))
total = cum_weights[-1]
return seq[bisect.bisect_left(cum_weights, self.rng.random() * total)]
def weighted_greater_than(self, first, second):
total = first + second
if total == 0:
return False
return self.coinFlip(float(first) / total)
def sqrtBlur(self, value):
# This is exceedingly dumb, but it matches the Java code.
root = math.sqrt(value)
if self.coinFlip():
return value + root
return value - root

View File

@ -0,0 +1,54 @@
# Randomness System
## Overview
The randomness system is a utility component of the Copycat architecture that provides controlled random number generation and probabilistic operations. This system ensures consistent and reproducible random behavior across the analogical reasoning process.
## Key Features
- Random number generation
- Probability distributions
- State management
- Seed control
- Reproducibility
## Operation Types
1. **Basic Operations**
- Random number generation
- Probability sampling
- Distribution selection
- State initialization
2. **Advanced Operations**
- Weighted selection
- Distribution mixing
- State transitions
- Pattern generation
3. **Special Operations**
- Seed management
- State persistence
- Debug control
- Test reproducibility
## Usage
Random operations are performed through the randomness system; `Randomness` wraps a seeded `random.Random`:
```python
from copycat.randomness import Randomness

random = Randomness(seed=42)             # seed it for reproducible runs
heads = random.coinFlip(0.25)            # True with probability 0.25
item = random.weighted_choice(['a', 'b', 'c'], [1, 1, 8])  # usually 'c'
blurred = random.sqrtBlur(100)           # 100 plus or minus sqrt(100)
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Codelets: Use randomness for selection
- Temperature: Uses randomness for control
- Statistics: Uses randomness for sampling
- Workspace: Uses randomness for operations

9
copycat/replacement.py Normal file
View File

@ -0,0 +1,9 @@
from .workspaceStructure import WorkspaceStructure
class Replacement(WorkspaceStructure):
def __init__(self, ctx, objectFromInitial, objectFromModified, relation):
WorkspaceStructure.__init__(self, ctx)
self.objectFromInitial = objectFromInitial
self.objectFromModified = objectFromModified
self.relation = relation

View File

@ -0,0 +1,54 @@
# Replacement System
## Overview
The replacement system is a utility component of the Copycat architecture that manages the substitution and replacement of objects and structures in the workspace. This system handles the transformation of elements during the analogical reasoning process.
## Key Features
- Object replacement
- Structure transformation
- Pattern substitution
- State management
- History tracking
## Operation Types
1. **Basic Operations**
- Element replacement
- Pattern substitution
- Structure transformation
- State updates
2. **Advanced Operations**
- Chain replacements
- Group transformations
- Context updates
- History tracking
3. **Special Operations**
- Rule application
- Mapping translation
- Meta-replacements
- Derived transformations
## Usage
`Replacement` itself is a passive record linking an object in the initial string to its counterpart in the modified string:
```python
# All three objects come from a running workspace; `relation` is a
# slipnet node such as successor.
replacement = Replacement(ctx, objectFromInitial, objectFromModified, relation)
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Workspace: Contains objects to replace
- Rule: Provides replacement rules
- Correspondence: Maps replacements
- Codelets: Execute replacements

149
copycat/rule.py Normal file
View File

@ -0,0 +1,149 @@
import logging
from .workspaceStructure import WorkspaceStructure
from . import formulas
class Rule(WorkspaceStructure):
def __init__(self, ctx, facet, descriptor, category, relation):
WorkspaceStructure.__init__(self, ctx)
self.facet = facet
self.descriptor = descriptor
self.category = category
self.relation = relation
def __str__(self):
if not self.facet:
return 'Empty rule'
return 'replace %s of %s %s by %s' % (
self.facet.name, self.descriptor.name,
self.category.name, self.relation.name)
def updateExternalStrength(self):
self.externalStrength = self.internalStrength
def updateInternalStrength(self):
workspace = self.ctx.workspace
if not (self.descriptor and self.relation):
self.internalStrength = 50.0
return
averageDepth = (self.descriptor.conceptualDepth +
self.relation.conceptualDepth) / 2.0
averageDepth **= 1.1 # LSaldyt: This value (1.1) seems 100% contrived.
# see if the object corresponds to an object
# if so, see if the descriptor is present (modulo slippages) in the
# corresponding object
changedObjects = [o for o in workspace.initial.objects if o.changed]
changed = changedObjects[0] if changedObjects else None
sharedDescriptorTerm = 0.0
if changed and changed.correspondence:
targetObject = changed.correspondence.objectFromTarget
slippages = workspace.slippages()
slipnode = self.descriptor.applySlippages(slippages)
if not targetObject.described(slipnode):
self.internalStrength = 0.0
return
sharedDescriptorTerm = 100.0
conceptual_height = (100.0 - self.descriptor.conceptualDepth) / 10.0 # LSaldyt: 10?
sharedDescriptorWeight = conceptual_height ** 1.4 # LSaldyt: 1.4 is also seemingly contrived
depthDifference = 100.0 - abs(self.descriptor.conceptualDepth -
self.relation.conceptualDepth)
weights = ((depthDifference, 12), # LSaldyt: ???
(averageDepth, 18), # ????
(sharedDescriptorTerm, sharedDescriptorWeight)) # 12 and 18 can be reduced to 2 and 3, depending on sharedDescriptorWeight
self.internalStrength = formulas.weightedAverage(weights)
if self.internalStrength > 100.0: # LSaldyt: A better formula wouldn't need to do this.
self.internalStrength = 100.0
def ruleEqual(self, other):
if not other:
return False
if self.relation != other.relation:
return False
if self.facet != other.facet:
return False
if self.category != other.category:
return False
if self.descriptor != other.descriptor:
return False
return True
def activateRuleDescriptions(self):
if self.relation:
self.relation.buffer = 100.0
if self.facet:
self.facet.buffer = 100.0
if self.category:
self.category.buffer = 100.0
if self.descriptor:
self.descriptor.buffer = 100.0
def incompatibleRuleCorrespondence(self, correspondence):
workspace = self.ctx.workspace
if not correspondence:
return False
# find changed object
changeds = [o for o in workspace.initial.objects if o.changed]
if not changeds:
return False
changed = changeds[0]
if correspondence.objectFromInitial != changed:
return False
        # it is incompatible if the rule descriptor appears in the mapping list
return any(m.initialDescriptor == self.descriptor
for m in correspondence.conceptMappings)
def __changeString(self, string):
slipnet = self.ctx.slipnet
        # applies this rule's relation to the given string, e.g. successor
if self.facet == slipnet.length:
if self.relation == slipnet.predecessor:
return string[:-1]
elif self.relation == slipnet.successor:
# This seems to be happening at the wrong level of abstraction.
# "Lengthening" is not an operation that makes sense on strings;
# it makes sense only on *groups*, and here we've lost the
# "groupiness" of this string. What gives?
return string + string[0]
return string
# apply character changes
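        # e.g. predecessor turns 'bcd' into 'abc' and successor turns it into
        # 'cde'; a string containing 'a' (or 'z') cannot be shifted, so the
        # transformation fails and returns None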
if self.relation == slipnet.predecessor:
if 'a' in string:
return None
return ''.join(chr(ord(c) - 1) for c in string)
elif self.relation == slipnet.successor:
if 'z' in string:
return None
return ''.join(chr(ord(c) + 1) for c in string)
else:
return self.relation.name.lower()
def buildTranslatedRule(self):
workspace = self.ctx.workspace
if not (self.descriptor and self.relation):
return workspace.targetString
slippages = workspace.slippages()
self.category = self.category.applySlippages(slippages)
self.facet = self.facet.applySlippages(slippages)
self.descriptor = self.descriptor.applySlippages(slippages)
self.relation = self.relation.applySlippages(slippages)
# generate the final string
changeds = [o for o in workspace.target.objects if
o.described(self.descriptor) and
o.described(self.category)]
if len(changeds) == 0:
return workspace.targetString
elif len(changeds) > 1:
logging.info("More than one letter changed. Sorry, I can't solve problems like this right now.")
return None
else:
changed = changeds[0]
logging.debug('changed object = %s', changed)
left = changed.leftIndex - 1
right = changed.rightIndex
s = workspace.targetString
changed_middle = self.__changeString(s[left:right])
if changed_middle is None:
return None
return s[:left] + changed_middle + s[right:]

47
copycat/rule_README.md Normal file
View File

@@ -0,0 +1,47 @@
# Rule System
## Overview
`rule.py` implements the Rule system, a key component of Copycat that manages the transformation rules used in analogical reasoning. It handles the creation, evaluation, and application of rules that describe how to transform strings based on their properties and relationships.
## Core Components
- `Rule` class: Main class that represents a transformation rule
- Rule evaluation system
- Rule translation and application system
## Key Features
- Defines transformation rules with facets, descriptors, categories, and relations
- Evaluates rule strength based on multiple factors
- Supports rule translation through concept slippages
- Handles string transformations based on rules
- Manages rule compatibility with correspondences
## Rule Components
- `facet`: the facet of the object to change (e.g. letter category or length)
- `descriptor`: identifies which object changes (e.g. rightmost)
- `category`: the kind of object being changed (letter or group)
- `relation`: the transformation to apply (e.g. successor)
## Main Methods
- `updateInternalStrength()`: Calculate rule strength
- `updateExternalStrength()`: Update external strength
- `activateRuleDescriptions()`: Activate rule-related concepts
- `buildTranslatedRule()`: Apply rule to target string
- `incompatibleRuleCorrespondence()`: Check rule-correspondence compatibility
- `ruleEqual()`: Compare rules for equality
## Rule Types
- Length-based rules (predecessor, successor)
- Character-based rules (predecessor, successor)
- Category-based rules
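The character-based rules above shift every letter of the changed substring to its alphabetic neighbour, exactly as in `Rule.__changeString`. A minimal standalone sketch of the successor case (the helper name is hypothetical):
```python
def successor_of(string):
    # Mirrors the character-based successor branch of Rule.__changeString:
    # 'z' has no successor, so the transformation fails and returns None.
    if 'z' in string:
        return None
    return ''.join(chr(ord(c) + 1) for c in string)

assert successor_of('abc') == 'bcd'
assert successor_of('xyz') is None
```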
## Dependencies
- Requires `workspaceStructure` and `formulas` modules
- Uses `logging` for debug output
- Used by the main `copycat` module
## Notes
- Rules are evaluated based on conceptual depth and descriptor sharing
- Rule strength is calculated using weighted averages
- Rules can be translated through concept slippages
- The system supports both single-character and length-based transformations
- Rules can be incompatible with certain correspondences
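A usage sketch, assuming a constructed `ctx` whose slipnet follows `copycat/slipnet.py` (the node names below come from that module):
```python
slipnet = ctx.slipnet
# The canonical Copycat rule for problems like abc -> abd
rule = Rule(ctx,
            facet=slipnet.letterCategory,
            descriptor=slipnet.rightmost,
            category=slipnet.letter,
            relation=slipnet.successor)
print(rule)  # replace letter category of rightmost letter by successor
```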

4
copycat/sampleText.txt Normal file
View File

@@ -0,0 +1,4 @@
1,2
3,4
7,7
100,100

27
copycat/sliplink.py Normal file
View File

@@ -0,0 +1,27 @@
class Sliplink(object):
def __init__(self, source, destination, label=None, length=0.0):
self.source = source
self.destination = destination
self.label = label
self.fixedLength = length
source.outgoingLinks += [self]
destination.incomingLinks += [self]
def degreeOfAssociation(self):
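        # Fixed-length links are unaffected by slipnet activity; labelled
        # links inherit their degree of association from the label node.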
if self.fixedLength > 0 or not self.label:
return 100.0 - self.fixedLength
return self.label.degreeOfAssociation()
def intrinsicDegreeOfAssociation(self):
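        # Note: the fixed-length test here is '> 1' (not '> 0' as above), so a
        # link of length exactly 1 falls through to the label check.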
if self.fixedLength > 1:
return 100.0 - self.fixedLength
if self.label:
return 100.0 - self.label.intrinsicLinkLength
return 0.0
def spread_activation(self):
self.destination.buffer += self.intrinsicDegreeOfAssociation()
def points_at(self, other):
return self.destination == other

54
copycat/sliplink_README.md Normal file
View File

@@ -0,0 +1,54 @@
# Sliplink System
## Overview
The sliplink system is a core component of the Copycat architecture that represents the connections between nodes in the slipnet. Each `Sliplink` is a directed edge carrying either a label node or a fixed length; it is used to compute degrees of association between concepts and to spread activation through the network.
## Key Features
- Directed link representation between slipnodes
- Labelled links whose strength comes from the label node
- Fixed-length links unaffected by slipnet activity
- Degree-of-association queries
- Activation spreading into the destination node
## Link Types
The slipnet builds five families of links, mirroring the link lists kept on each `Slipnode`:
1. **Category links**
   - From an instance up to its category (e.g. `a` to `letter`)
2. **Instance links**
   - From a category down to its instances
3. **Property links**
   - From a node to one of its properties (e.g. `a` to `first`)
4. **Lateral slip links**
   - Between concepts that may slip into one another during mapping
5. **Lateral non-slip links**
   - Between related concepts that do not slip
Each link either carries a label node, from which it inherits its degree of association, or a fixed length set at construction.
## Usage
Sliplinks are created when the slipnet is built and are queried during activation spreading. The calls below are the interface actually defined in `sliplink.py`:
```python
# Create an unlabelled sliplink with a fixed length
link = Sliplink(source_node, destination_node, label=None, length=40.0)
# Degree of association (100.0 - fixedLength for fixed-length links)
strength = link.degreeOfAssociation()
# Spread activation into the destination node's buffer
link.spread_activation()
```
## Dependencies
- Python 3.x
- No external dependencies required
## Related Components
- Slipnet: Contains sliplinks
- Slipnode: Connected by sliplinks
- Codelets: Use sliplinks for traversal
- Workspace: Uses links for relationships
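A runnable sketch of this interface (the stand-in node class is hypothetical; real nodes are `Slipnode` instances with the same attributes):
```python
from copycat.sliplink import Sliplink

class FakeNode:
    # Minimal stand-in for Slipnode: just the attributes Sliplink touches.
    def __init__(self):
        self.outgoingLinks = []
        self.incomingLinks = []
        self.buffer = 0.0

a, b = FakeNode(), FakeNode()
link = Sliplink(a, b, label=None, length=40.0)
print(link.degreeOfAssociation())  # 60.0 (fixed-length link: 100 - 40)
link.spread_activation()           # adds 60.0 to b.buffer
print(link.points_at(b))           # True
```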

Some files were not shown because too many files have changed in this diff.