From f87a57354a1e575181e760fdaedbb2c2d5cf9fa0 Mon Sep 17 00:00:00 2001
From: =?utf8?q?Fran=C3=A7ois=20Fleuret?=
Date: Sat, 22 Jun 2024 15:24:44 +0200
Subject: [PATCH] Update.

---
 README.txt | 60 ++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 43 insertions(+), 17 deletions(-)

diff --git a/README.txt b/README.txt
index 489a792..af96ee9 100644
--- a/README.txt
+++ b/README.txt
@@ -1,28 +1,54 @@
-18.10.2023
-./main.py --task=qmlp --model=352M --nb_train_samples=250000 --result_dir=results_qmlp_352M --batch_size=2
+Trying to make GPTs build their own "culture".
 
-~11h per epoch on 3090 Ti
+* Motivation
 
-======================================================================
-For the stack experiment:
+The original motivation of this experiment is the hypothesis that
+high-level cognition emerges from the competition among humans in the
+space of language and ideas.
 
-./main.py --task=stack
+More precisely, communicating agents try to out-do competitors by
+creating stuff that is smart but doable, i.e. some other agents get
+it, but not all. Then, that smart thing is added to the "culture",
+they all learn and come to understand it, and the process repeats.
 
-Takes ~1h10min on a 4090.
+* Setup
 
-======================================================================
-For the arithmetic expressions experiments
+It starts with a "world model" that the agents acquire before they
+communicate, and from there, they try to "be smart" by proposing
+quizzes that can be solved, but not by everybody.
 
-# 38M parameters / 250k samples
+There are 5 competing GPTs.
 
-./main.py --task=expr
+The "world" is a 6x8 grid with one or two "birds" moving in a straight
+line and bouncing off the world's borders. The colors correspond to a
+fixed "z-buffer order". It could be another "world", but this one has
+objectness, occlusion, and motion.
 
-# 352M parameters / 2.5M samples, reaches 99.80% after 12 epochs, the
-  learning rate schedule is obviously terrible
+Given a random world state, and the state after two iterations of
+the birds' motion, a "quiz" is to predict the second frame given
+the first, or vice versa.
 
-./main.py --task=expr --nb_blocks=48 --dim_model=1024 --nb_train_samples=2500000 --result_dir=results_expr_48b_d1024_2.5M
-======================================================================
-25.07.2023
+My home-baked GPT-37M trained with 250k samples solves this with ~99% success.
 
-./main.py --task=sandbox --nb_train_samples=10000 --nb_test_samples=1000 --nb_blocks=4 --nb_heads=1 --nb_epochs=20
+At every iteration, we select the GPT with the lowest test accuracy
+and run one epoch. If its test accuracy then exceeds 97.5%, it
+creates new quizzes. To do so, it generates a large number of pairs
+of frames and checks which of these quizzes are hard but not too
+hard, which means
+
+[THIS IS THE IMPORTANT BIT]:
+
+a quiz can be solved, in both time directions, by all the other GPTs
+**but one**.
+
+Requiring both time directions avoids a trivial type of quiz that
+amounts to dealing with noise in the first frame.
+
+The GPT generates 1000 such quizzes, which are added to the
+"culture", i.e. the training set.
+
+Then training resumes.
+
+The hope is that interesting concepts emerge (connectivity, symmetry,
+interior/exterior, a shape vocabulary, etc.).
 
-- 
2.39.5
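
======================================================================

The selection and quiz-validation loop described in the patched README
above, as a minimal Python sketch. The GPT interface used here
(test_accuracy(), train_one_epoch(), generate_quiz(), solves()) is
hypothetical scaffolding, not the actual API of main.py; only the
constants (5 GPTs, the 97.5% threshold, 1000 quizzes per round) and
the both-time-directions rule come from the README text.

NB_GPTS = 5
ACCURACY_THRESHOLD = 0.975
NB_QUIZZES_PER_ROUND = 1000

def quiz_enters_culture(quiz, author, gpts):
    # A quiz is "hard but not too hard" if, in both time directions,
    # it is solved by all the GPTs other than its author but one
    # (the README's "all the other GPTs **but one**", read here as
    # exactly all but one).
    others = [gpt for gpt in gpts if gpt is not author]
    for direction in ("forward", "backward"):
        nb_solvers = sum(gpt.solves(quiz, direction) for gpt in others)
        if nb_solvers != len(others) - 1:
            return False
    return True

def culture_iteration(gpts, train_set):
    # Pick the weakest model and give it one epoch of training.
    weakest = min(gpts, key=lambda gpt: gpt.test_accuracy())
    weakest.train_one_epoch(train_set)

    # Once it is good enough, it earns the right to contribute new
    # quizzes to the shared training set; training then resumes.
    if weakest.test_accuracy() > ACCURACY_THRESHOLD:
        new_quizzes = []
        while len(new_quizzes) < NB_QUIZZES_PER_ROUND:
            quiz = weakest.generate_quiz()  # a pair of frames
            if quiz_enters_culture(quiz, weakest, gpts):
                new_quizzes.append(quiz)
        train_set.extend(new_quizzes)  # the "culture" grows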