laarc.io. Looking for AI work. DMs open. ML discord: discordapp.com/invite/x52Xz3…

Seattle, WA
Joined January 2009
#laarc now has a "suggest title" link on the submission page. Submit a story! laarc.io/ Thanks to the lobste.rs/ crew for suggesting this.
Shawn Presser retweeted
Doubled the number of the caterpillar's legs; now it's lightning fast
Shawn Presser retweeted
Give a worm some tiny legs and it becomes...a caterpillar?
No context GAN should really be a thing. Although I suppose they're all no context...
A paper was rejected with the justification: "Training on data is the wrong way to instill knowledge in an algorithm." I find this hilarious for some reason. Perhaps next they'll reject a paper with "Multiple layers are the wrong way to organize an ML model."
I second Roger's view that this is the strangest feedback I have ever received for any paper submission. While the four reviewers were very constructive and we added several new experiments to support our hypothesis, the meta-review was just one dismissive sentence.
Some BigGAN updates (1.5 million steps). I should really measure the FID... Also lol at the zombie in a tuxedo.
Shawn Presser retweeted
i like how cyclegan's "horse2zebra" model turns the grass brown because all its training images of zebras were taken in the savannah
Anyone who feels happy that 70TB of Parler user data was leaked: you’re cheering a criminal committing a crime because you happen to dislike the target. It’s breathtaking seeing how many people are legitimately two-faced and unprincipled.
That feeling when you and gwern have been working on this exact thing for over a year, and then OpenAI just drops it in everyone’s lap. Oh well. I was always doing it more for the challenge than the result.
colab.research.google.com/dr… I'm excited to finally share the Colab notebook for generating images from text using the SIREN and CLIP architecture and models. Have fun, and please share what you create!
Show this thread
We now live in a world where we’re trying to figure out just how smart a model is by tricking it with variations on language. And it works. Maybe ML engineers will primarily be ML prompt engineers. Interesting future.
“A photo of a person, {n} years old.” Better: about 9 years of overestimation, but still not great.
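The prompt trick above amounts to zero-shot classification by prompt variation: embed one prompt per candidate age, embed the image into the same space, and take the best-matching prompt. The sketch below is a toy, not real CLIP: `encode_text` is a hypothetical stand-in (a deterministic pseudo-random unit vector per prompt) for CLIP's actual text encoder, and the "image" embedding is faked from a prompt plus noise, so only the scoring pattern is faithful.

```python
import numpy as np

def encode_text(prompt, dim=64):
    """Hypothetical stand-in for CLIP's text encoder: a deterministic
    pseudo-random unit vector per prompt (real CLIP uses a transformer
    trained jointly with an image encoder)."""
    g = np.random.default_rng(abs(hash(prompt)) % (2**32))
    v = g.normal(size=dim)
    return v / np.linalg.norm(v)

def estimate_age(image_embedding, ages=range(1, 100)):
    """Score one prompt per candidate age against the image embedding
    and return the argmax -- zero-shot classification via prompts."""
    prompts = [f"A photo of a person, {n} years old." for n in ages]
    sims = [float(image_embedding @ encode_text(p)) for p in prompts]
    return list(ages)[int(np.argmax(sims))]

# Fake "image" embedding: close in embedding space to the age-35 prompt.
rng = np.random.default_rng(1)
img = encode_text("A photo of a person, 35 years old.") + 0.1 * rng.normal(size=64)
img = img / np.linalg.norm(img)
print(estimate_age(img))
```

With a real CLIP model the structure is the same; the systematic ~9-year overestimate the tweet reports would come from biases in the image/text training data, not from the scoring loop.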
Shawn Presser retweeted
Here's a video of a conditional model I trained last March. The text on the bottom is encoded into a vector and fed into StyleGAN's mapping network. The right side is a heatmap of the dlatents. I guess I can say I was working on text-to-image synthesis before it was cool.😉
Haha. I checked on my BigGAN-Deep run and found a bird-camera. It's a bird with the head of a camera, in the lower right. Also the model is getting pretty good.
The doggos are nice. And in general the whole model is sharpening up. I wonder what the FID is... I'll have to measure it sometime.
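For reference, FID is the Fréchet distance between two Gaussians fitted to feature statistics of real and generated images. A minimal numpy sketch, assuming the Inception-v3 feature-extraction step has already produced two `(n_samples, dim)` arrays (that step is omitted here):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2*sqrtm(cov1 @ cov2)).
    For PSD covariances, Tr(sqrtm(cov1 @ cov2)) equals the sum of the
    square roots of the eigenvalues of cov1 @ cov2."""
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2 * tr_sqrt)

def fid_from_features(real_feats, fake_feats):
    """FID given two (n_samples, dim) feature arrays. A real FID
    measurement extracts these with an Inception-v3 network."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    return frechet_distance(mu_r, cov_r, mu_f, cov_f)
```

Identical feature sets give an FID near zero. Standard implementations compute the matrix square root with `scipy.linalg.sqrtm`; the eigenvalue trick above gives the same trace for PSD covariance matrices.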
How it feels to work with GANs for a year
Since the world's melting around us, let's do something fun. Why not openly post where you stand about the most controversial topics? politicalcompass.org/test (I think one form of privilege is that I feel no hesitation about showing this publicly. It's not so easy for others.)
Pretty accurate test though, apparently. It's right where I would've pegged myself.
When I try to bring up these points, people say "Oh, look at you, casting stones." But I'm not casting stones. I'm trying to point out that if we consider ourselves scientists, we should not claim we've done something we haven't, or mislead those without technical knowledge.
Now, all that said, I look forward to both lucid and Eleuther proving me wrong by having a working, trained, *useful* model, within "a couple weeks or so." I regret calling out this sort of thing, but my advice to people looking for code: Wait. And be skeptical of claims.
The reason I liked @NPCollapse's original GPT-2 replication saga was because he was extremely straightforward: "No, this doesn't work. Yes, I'm trying. Here's what I see as the hurdles. I don't know if it will work." He wasn't trying to snag github stars or land-grab mindshare.