I've noticed a number of people using AI Dungeon to test GPT-3's abilities. While it's a great way to see how GPT-3 can power an interesting application, it's a poor test of GPT-3's abilities in general. The first generation of any custom prompt is actually GPT-2.
Are there any other differences you can tell us about? Prepending, separating, or wrapping input? Fine-tuning on some story-focused corpus? Context size limits? Something else?
Replying to @DerekMc00
We cut off the generation at certain points (trailing sentences, etc.), disable certain tokens to improve performance or make generation safer, fine-tune on text adventures, and only use the last ~1000 tokens of context.

5:04 PM · Aug 2, 2020

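A minimal sketch of the pipeline Nick describes above, assuming the pre-1.0 `openai` Python client; the engine name, banned token IDs, and exact context budget are illustrative stand-ins, not AI Dungeon's actual values.

```python
import re
import openai

BANNED_TOKENS = {"50256": -100}  # a logit_bias of -100 effectively disables a token
CONTEXT_BUDGET = 1000            # only the most recent ~1000 tokens are kept

def truncate_context(story_tokens):
    """Keep only the last ~1000 tokens of the story (token IDs from a GPT-2 BPE tokenizer)."""
    return story_tokens[-CONTEXT_BUDGET:]

def trim_trailing_sentence(text):
    """Cut the generation off at the last complete sentence."""
    match = re.search(r'^(.*[.!?"])', text, re.DOTALL)
    return match.group(1) if match else text

def continue_story(prompt):
    response = openai.Completion.create(
        engine="davinci",          # stand-in; the fine-tuned text-adventure model isn't public
        prompt=prompt,
        max_tokens=60,
        temperature=0.8,
        logit_bias=BANNED_TOKENS,  # disable specific tokens during sampling
    )
    return trim_trailing_sentence(response["choices"][0]["text"])
```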
Replying to @nickwalton00
So that’s why it tends to go off the rails after a bit. It’s fun for a while, but it’s too open-ended to be a proper game. What options have you considered? Maybe ask GPT to make a summary of the story before forgetting, and then keep that? Stronger memory of place, genre, and characters?
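A rough sketch of the summarize-before-forgetting idea from the reply above; this is not a confirmed AI Dungeon feature, and `complete()` is a hypothetical stand-in for whatever completion call you use.

```python
def compress_history(summary, recent_text, max_recent_words=800):
    """Fold the oldest part of recent_text into a running summary once it overflows."""
    words = recent_text.split()          # crude word count as a proxy for tokens
    if len(words) <= max_recent_words:
        return summary, recent_text
    overflow = " ".join(words[:-max_recent_words])
    kept = " ".join(words[-max_recent_words:])
    new_summary = complete(
        "Summarize the story so far, keeping place, genre, and characters:\n"
        f"{summary}\n{overflow}\n\nSummary:"
    )
    return new_summary, kept

def build_prompt(summary, recent_text, player_action):
    # Pinned summary first, then the rolling window, then the new action.
    return f"Story summary: {summary}\n\n{recent_text}\n\n> {player_action}\n"
```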
Replying to @nickwalton00
Do you do anything to tell GPT-2/3 about anything that happened prior to ~1000 tokens ago? Or is everything just about what's happening right "now"-ish?
Replying to @nickwalton00
Are the last ~1000 tokens of context the "to be remembered" text and the regular story together, or only the regular story? I.e., does remembered stuff have its own space in the prompt?
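One plausible layout for the question above, not confirmed anywhere in the thread: pinned "remembered" text gets its own slot at the top of the prompt, and the rolling story fills whatever remains of the ~1000-token budget. The function and parameter names here are illustrative.

```python
def assemble_prompt(remembered, story_words, budget=1000):
    """remembered: list of pinned facts; story_words: the story so far, split into words."""
    memory_block = "\n".join(remembered)
    memory_cost = len(memory_block.split())      # rough word count as a token proxy
    story_budget = max(0, budget - memory_cost)
    recent_story = " ".join(story_words[-story_budget:]) if story_budget else ""
    return f"{memory_block}\n\n{recent_story}"
```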
Replying to @nickwalton00
Thank you for explaining all of this. While I'm disappointed to hear it, it doesn't surprise me, as I know how "Open"AI tends to be. If it were your decision, maybe you'd be better than them and not implement any of this silly stuff. Anyone hurt by text only has themselves to blame.