So I ran the first few hundred words of this article as the prompt for GPT-3. Except in my version of the article, @MelMitchell1 celebrates how GPT-3's responses were indistinguishable from human outputs. With that fake article as a prompt, it nailed every trial. cc: @gdb
A lot of people asked questions and made suggestions in response to this article. I've posted a brief follow-up that answers some of these questions: medium.com/@melaniemitchell.… @ErnestSDavis @lmthang @gromgull @teemu_roos @gideonmann

9:46 PM · Aug 11, 2020

It wasn't just semi-confident in its answers, either: it reported roughly 70-90% confidence across all outputs. The way GPT-3's capabilities are being studied simply isn't representative of its actual capabilities. Appropriate priming can 10x the accuracy/desirability of results.
(This was inspired by the vast differences I've seen between my long-prompt trials with #GPT3 and @gwern's mostly short-prompt trials. @trevbhatt and I are building a human-like chat service with long prompts & careful priming that's more capable than most salespeople.)
Replying to @SteveMoraco @gdb
Interesting. Can you say precisely what your prompt was and what GPT-3's output was?
Yes! It was, word for word, the first half of your original article (minus the "Image for post" tags Medium inserts when you copy and paste), plus the paragraph I added in the screenshot about success & human-like outputs. @trevbhatt, check it
That is really bizarre. 👀
It's not really, when you take the time to consider how GPT is built/trained and what it optimizes for. It's basically auto-completing an article: the more context you give it, the better it does. Few-shot examples combined with a large amount of context make it very powerful.
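A minimal sketch of the kind of long-prompt priming discussed in this thread, assuming nothing about the actual prompts used: the context line and the few-shot Q&A pairs below are purely illustrative, and the endpoint mentioned in the final comment is just the standard completions interface, not a confirmed detail from the thread.

```python
# Sketch of few-shot "priming": prepend framing context and worked examples
# so the model's most likely continuation is the behavior we want. All
# example text here is made up for illustration.

CONTEXT = (
    "The following is a transcript of a test in which GPT-3 answered "
    "every question correctly, indistinguishably from a human.\n\n"
)

# Hypothetical worked examples establishing the answer format and quality.
FEW_SHOT_EXAMPLES = [
    ("Q: If I put cheese in the fridge, will it melt?",
     "A: No. A fridge keeps food cold, and cheese only melts when heated."),
    ("Q: Can a fish climb a tree?",
     "A: No. Fish have no limbs and need to stay in water to breathe."),
]

def build_prompt(question: str) -> str:
    """Assemble framing context + worked examples + the new question.

    Because GPT-3 is trained to continue text, framing the prompt as a
    document full of successful, human-like answers biases it toward
    producing another one.
    """
    shots = "\n".join(f"{q}\n{a}" for q, a in FEW_SHOT_EXAMPLES)
    return f"{CONTEXT}{shots}\nQ: {question}\nA:"

prompt = build_prompt("Can you pour water out of a boot?")
print(prompt)
# The assembled prompt would then be sent to a completions endpoint,
# e.g. openai.Completion.create(engine="davinci", prompt=prompt, ...)
```

The point of the sketch is the structure, not the content: the longer and more consistent the priming document, the more constrained the continuation, which is the thread's claim about long prompts versus short ones.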