So I ran the first few hundred words of this article as the prompt for GPT-3. Except in my version of the article, @MelMitchell1 celebrates how GPT-3's responses were indistinguishable from human outputs. With that fake article as a prompt, it nailed every trial. cc: @gdb
It's not really surprising when you take the time to consider how GPT is built/trained and what it optimizes for. It's basically auto-completing an article: the more context you give it, the better it does. Few-shot prompting combined with a large amount of context makes it very powerful.
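The "few-shot plus context" point can be sketched as plain prompt construction. At inference time the model only ever sees one long string, which it auto-completes; the examples in the prompt steer the completion. This is a minimal illustrative helper, not any real API call, and all names here are hypothetical:

```python
# Hypothetical sketch: few-shot prompting is just string concatenation.
# No model or API is invoked; the point is that the model receives one
# long text and continues it.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples plus the new query into one prompt.

    The model sees only this string at inference time and auto-completes
    it, so more (and better-chosen) in-context examples improve results.
    """
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {query}\nA:")  # leave the answer open for the model
    return "\n\n".join(parts)

examples = [
    ("2 + 2", "4"),
    ("3 + 5", "8"),
]
prompt = build_few_shot_prompt(examples, "7 + 6")
print(prompt)
```

The model's "training" here happens entirely in the prompt: nothing is fine-tuned, the context alone conditions the completion.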