
The examples on GitHub are so much better than the examples here. I wonder whether the authors are cherry-picking or we just don't know how to get good results.

Why are people downvoting this? The examples on GitHub are clearly doing the action described (unlike the dachshund above) and don't have the wild deformations that the "Spongebob hugging a cactus" or "Will Smith eating spaghetti" animations have.



In my experience, generative AI processes always involve a lot of cherry-picking; rarely have I generated one image or one text that was perfect or truly representative of what can be attained with some refining.

Seems fair to choose good results when you try to demonstrate what the software "can" do. Maybe a mention of the process often being iterative would be the most honest, but I think anyone familiar with these tools assumes this by now.


This is, unfortunately, standard practice in generative papers. The best approach is to show both cherry-picked and randomly selected samples, but if you don't cherry-pick, you don't get published. Even including random samples can put you in a tough position.

To be fair, metrics for vision are limited and many don't know how to interpret them correctly. You want to know both the best images the network can produce and the average. Reviewers, unfortunately, just want to reject (there's no incentive to accept). That said, cherry-picking can still be a useful signal because it shows the best (or near best) the model can produce. This is also why platforms like HuggingFace are so critical.
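
For instance, a distribution-level metric like FID only tells you how a set of samples compares to real data on average; it says nothing about the best clip you can reach with retries. A minimal sketch with torchmetrics (the tensors here are random placeholders, not anything from the paper):

  import torch
  from torchmetrics.image.fid import FrechetInceptionDistance

  # FID compares Inception feature statistics of two sample sets, so it
  # scores the distribution as a whole, not the best individual sample.
  fid = FrechetInceptionDistance(feature=2048)

  # uint8 images of shape (N, 3, H, W); placeholders for illustration only
  real = torch.randint(0, 255, (64, 3, 299, 299), dtype=torch.uint8)
  fake = torch.randint(0, 255, (64, 3, 299, 299), dtype=torch.uint8)

  fid.update(real, real=True)
  fid.update(fake, real=False)
  print(fid.compute())  # one score for the whole set; lower is better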

I do think we should ask more of the research community, but just know that this is in fact normal, even standard. Most researchers judge work by things like citations and by reading the papers themselves. The more hype in a research area, the noisier the signal for good research. (This is good research imo, fwiw.)


They're probably cherry-picking. I know on ModelScope text2video I have to generate 10 clips per prompt to get 2 or 3 usable clips, and out of those will come one really good one. But it's fast enough that I just sort of accept that as the workflow and generate in batches of 10. I assume it's likely the same for this.
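
For reference, that batch workflow with the diffusers port of the ModelScope checkpoint looks roughly like this (a sketch based on the library's documented usage; model id, prompt, and settings are illustrative defaults, not my exact setup):

  import torch
  from diffusers import DiffusionPipeline
  from diffusers.utils import export_to_video

  # ModelScope text2video checkpoint as ported to diffusers
  pipe = DiffusionPipeline.from_pretrained(
      "damo-vilab/text-to-video-ms-1.7b",
      torch_dtype=torch.float16,
      variant="fp16",
  ).to("cuda")

  prompt = "a dachshund running on the beach"

  # Generate a batch of 10 clips, then keep the 2-3 usable ones by eye
  for i in range(10):
      frames = pipe(prompt, num_frames=16).frames[0]
      export_to_video(frames, f"clip_{i:02d}.mp4")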

Can't wait to try it though.


It only matters to the impatient among us; give it a few weeks and things will only get better. The pace of AI innovation is frightening, but exciting too.



