It's perfectly reasonable to release a publicly accessible paper while keeping the code to yourself, especially if you're Meta or OpenAI and wish to commercialize it at some point.
You can recreate things from papers just fine. I've done it for several projects; it's often nicer than just copy-pasting in code, and it fixes issues where one group is using Montreal's AI toolkit, another is using PyTorch, and yet another is using Keras.
Although for a tool like this, they clearly relied on pre-trained models as a large component, ones with publicly accessible weights at that. So a replication will probably appear in the coming months if Meta doesn't (understandably) release the code they very clearly plan to use for their own Metaverse product.
Sure, it's perfectly reasonable to release such a paper as PR. I don't think it's reasonable for any academic journal to accept it. Leaving the code out of a paper whose claims are about the code is like leaving the experimental design out of a materials science paper.
In addition, it's worth noting that Meta is generally good about releasing source code. Often there's a paper deadline and the code still needs tidying up, or the same codebase supports additional models that are published in follow-up papers.
Keep an eye on the facebookresearch GitHub for this in the next few months.
Code is nice, but a paper should be written well enough that it gets the ideas across and the solution can be replicated from it. The ideas are the point, not the implementation.
What's the point then?