So I ran the first few hundred words of this article as the prompt for GPT-3
Except in my version of the article, @MelMitchell1 celebrates how GPT-3's responses were indistinguishable from human outputs
With that fake article as a prompt, it nailed every trial.
cc: @gdb
A lot of people asked questions and made suggestions in response to this article. I've posted a brief follow-up that answers some of these questions: medium.com/@melaniemitchell.…
@ErnestSDavis @lmthang @gromgull @teemu_roos @gideonmann
9:46 PM · Aug 11, 2020
It wasn't just semi-confident in its answers either; it reported roughly 70-90% confidence on every output. The way GPT-3's capabilities are being studied simply isn't representative of its actual capabilities.
Appropriate priming can 10x the accuracy/desirability of results.
(This was inspired by the vast differences I've seen between my long-prompt trials with #GPT3 and @gwern's mostly short-prompt trials. @trevbhatt and I are building a human-like chat service with long prompts and careful priming that's more capable than most salespeople.)
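To make the long-prompt priming idea concrete, here is a minimal sketch of what assembling such a prompt might look like: worked Q/A examples are prepended before the actual question, rather than sending the question alone. The helper name `build_primed_prompt` and the example pairs are hypothetical illustrations, not from the thread or any OpenAI API.

```python
# Hypothetical sketch of long-prompt "priming": prepend worked
# examples before the real question so the model continues the
# pattern. All names and examples here are illustrative.

EXAMPLES = [
    ("Is a mouse bigger than an elephant?", "No."),
    ("Is a house bigger than a pencil?", "Yes."),
]

def build_primed_prompt(question, examples=EXAMPLES):
    """Assemble a few-shot prompt: each example as a Q/A pair,
    then the new question with an open answer slot."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

prompt = build_primed_prompt("Is an ant bigger than a whale?")
print(prompt)
```

The resulting string would then be sent as the completion prompt; the point is that the surrounding examples, not the question alone, shape the quality of the answer.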