But there's no real rhyme or reason to it; it's a sort of alchemy.
Is text encoding strictly worse, or is that an artifact of the implementation? And if it is strictly worse, which is probably the case, why, specifically? What is actually going on here?
I can't argue that their results are not visually pleasing. But I'm not sure what one can really infer from all of this once the excitement wears off.
Blending photos together into a scene in Photoshop is not a difficult task. It's nuanced and tedious, but not hard; any pixel slinger will tell you that.
An app that accepts a smattering of photos and stitches them together nicely can be coded up any number of ways. That would make a fantastic, time-saving Photoshop plugin.
But what do we have really?
"Kuala dunking basketball" needs to "understand" the separate items and select from the image library hoops and a Kuala where the angles and shadows roughly match.
Very interesting, potentially useful. But if it doesn't spit out exactly what you want, you can't edit it further.
I think the next step has got to be conjuring up a 3D scene in Unreal or Blender, so you can zoom in and move around convincingly for further tweaks. Not a flat image.