So I ran the first few hundred words of this article as the prompt for GPT-3. Except in my version of the article, @MelMitchell1 celebrates how GPT-3's responses were indistinguishable from human outputs. With that fake article as a prompt, it nailed every trial. cc: @gdb
A lot of people asked questions and made suggestions in response to this article. I've posted a brief follow-up that answers some of these questions: medium.com/@melaniemitchell.… @ErnestSDavis @lmthang @gromgull @teemu_roos @gideonmann
That is really bizarre. 👀
It's not really, when you take the time to consider how GPT is built/trained and what it optimizes for. It's basically auto-completing an article: the more context you give it, the better it does. Few-shot prompting combined with a large amount of context makes it very powerful.
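To make that concrete, here is a minimal sketch of what a few-shot, context-heavy prompt could look like, assuming the 2020-era openai Python client and its legacy Completion endpoint. The article excerpt and the letter-string examples are hypothetical placeholders, not the exact prompts used in the thread.

```python
import openai  # 2020-era client; assumes OPENAI_API_KEY is set in the environment

# Hypothetical context: the first few hundred words of the article,
# followed by a couple of worked examples of the same problem type.
article_text = "..."  # placeholder for the article excerpt used as the pre-prompt
few_shot_examples = (
    "Q: If a b c changes to a b d, what does p q r change to?\n"
    "A: p q s\n\n"
    "Q: If a b c changes to a b d, what does i j k change to?\n"
    "A: i j l\n\n"
)
new_problem = "Q: If a b c changes to a b d, what does m n o change to?\nA:"

prompt = article_text + "\n\n" + few_shot_examples + new_problem

# Legacy Completions API call; GPT-3 simply continues the text,
# so the richer the context, the more constrained the continuation.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=10,
    temperature=0,
    stop="\n",
)
print(response.choices[0].text.strip())
```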
I tried it with your exact prompts, but it does not generalize to other problems, as mentioned by @MelMitchell1 in her article. One example below: the first image is with your prompt and example, the second is a continuation where I tried one example from the article:
I would structure it without changing the type of examples; that's fairly confusing even for a human. Just grab a single type of problem and run it.
No. Didn't work. Back to the prompt board :) Let me try other prompts.

3:23 AM · Aug 12, 2020

You're running the whole article before that paragraph as the pre-prompt, right, not just the paragraph in the screenshot?
Tried both; same answer.
Weird that it's not picking up on the outcomes in the previous examples too; looking at the probabilities it assigns there usually indicates whether it'll get the generation right.
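A minimal sketch of what that probability check might look like, assuming the same 2020-era openai client: the legacy Completion endpoint can echo the prompt back with per-token logprobs, so you can inspect how strongly the model expected the answers in the earlier examples. The prompt variable here is a hypothetical placeholder.

```python
import openai  # 2020-era client; assumes OPENAI_API_KEY is set in the environment

prompt = "..."  # hypothetical: the pre-prompt plus the worked examples

# echo=True returns logprobs for the prompt tokens themselves (no new text generated),
# which lets you check how probable the model found the answers in the earlier examples.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=0,
    echo=True,
    logprobs=1,
)

tokens = response.choices[0].logprobs.tokens
token_logprobs = response.choices[0].logprobs.token_logprobs
for tok, lp in zip(tokens, token_logprobs):
    # The very first token has no logprob (nothing to condition on).
    print(f"{tok!r}: {lp if lp is not None else 'n/a'}")
```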
Will test more on my end tomorrow too. Pleasantly surprised by the outcomes today, but I definitely want to understand this more completely.