Apologies for potentially asking a really silly question but:
Cesium plugin is free and open source, however the Ion service (the actual data) is paid.
So of what use is the plugin without the streamed data? Can we choose to patch into Bing photogrammetry data as an alternative?
Yes, you can load practically any dataset into Cesium, so you are not locked into their paid service and can self host. You might have to do a format conversion. I do think Ion is a nice service, and 3D globe data is going to be expensive no matter who you license it from.
I'm using the web version of Cesium in https://james.darpinian.com/satellites/ and I'm self hosting everything including the globe imagery, for free. (I don't need high resolution imagery for the whole globe because I only show a zoomed out view, but you could store and serve it all yourself in principle).
In addition to the nice 3D graphics I'm also using a bunch of Cesium functions for the actual satellite visibility calculations. It's a great library.
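For anyone curious what the simplest form of such a visibility check involves, here is a rough sketch of the core geometry (a spherical-Earth elevation-angle test). This is not Cesium's actual API; the struct and function names are made up for illustration, and the real thing relies on Cesium's coordinate transforms, ellipsoid math, and time handling.

```rust
// Rough sketch only: a spherical-Earth elevation test, not Cesium's API.
// Names (Vec3, elevation_deg) are illustrative.

const EARTH_RADIUS_M: f64 = 6_371_000.0;

#[derive(Clone, Copy)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    fn len(self) -> f64 { self.dot(self).sqrt() }
}

/// Elevation angle of the satellite above the observer's local horizon, in degrees.
/// Positive means it is geometrically visible (ignoring terrain, refraction, lighting).
fn elevation_deg(observer_ecef: Vec3, satellite_ecef: Vec3) -> f64 {
    let to_sat = satellite_ecef.sub(observer_ecef);
    // On a spherical Earth, "up" at the observer is just the position vector normalized.
    let cos_zenith = observer_ecef.dot(to_sat) / (observer_ecef.len() * to_sat.len());
    90.0 - cos_zenith.clamp(-1.0, 1.0).acos().to_degrees()
}

fn main() {
    // Observer at sea level on the equator; satellite 500 km directly overhead.
    let obs = Vec3 { x: EARTH_RADIUS_M, y: 0.0, z: 0.0 };
    let sat = Vec3 { x: EARTH_RADIUS_M + 500_000.0, y: 0.0, z: 0.0 };
    println!("elevation: {:.1} deg", elevation_deg(obs, sat)); // ~90.0
}
```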
Wow, I love your site! It makes such a big difference when you give the situated view of what the object in the sky will look like from my point of view, using Street View imagery. Totally raises the bar for any site giving you visibility predictions.
I'm sure that required a lot of fiddly details to get right, but it pays off hugely. And I can imagine it's the kind of thing that lots of people wouldn't even think of as important. Well done!
Absolutely.
They make a huge point about the plugin being open source and how that's "how business should be done", so I thought there would be more to it: some genuinely useful tech beyond the same old vendor lock-in.
As a developer, having an open source client side is hugely beneficial. With closed-source mapping libs you often know exactly what is wrong with the provider's library from your stack traces, but have no way to fix it. With open source, trivial bugs in things like null handling and deallocation, and even more serious ones like CPU-use regressions (which are easy to introduce with one incorrect condition and can slip through the cracks until end users' batteries are dying), become trivial fixes instead of: notify the provider, the provider may or may not put a fix on their roadmap, and the fix may or may not arrive weeks later in the next closed-source SDK update.
I haven’t looked deeply at this, but sophisticated level streaming for large areas isn’t something Unreal supports out of the box. Very cool of them to open source this.
> sophisticated level streaming for large areas isn’t something Unreal supports out of the box
Yes it is. I don't think this is replacing Unreal's level streaming tech either; I'm reading this as a tool to generate content within Unreal's existing landscape tiling/streaming tools.
From the demo video this looks really interesting for games/use cases that are centered around the >1000ft view.
Even in the video showing off the technology, things look downright terrible up close. If you pause at the very beginning you can see handrails on the stairs that are just odd floating blobs, and cars that have melted into the ground.
It is incredibly impressive that they have managed to get aerial LIDAR into a game engine, but the visuals are quite jarring up close.
This is due to the method of data acquisition. My company also uses Cesium (in the browser) for visualization at sub-cm level for inspection. Our quality bar is that inspectors have to be able to inspect the grain of the grout between the bricks.
We acquire the data in three modes for tall buildings: a low-altitude flyover with the 2 m drone, then the facade with the regular drone, and finally shots from the ground. We only do ground-based lidar, but aerial lidar is improving, so maybe that's next. The rest is all high-detail photos fed into photogrammetry.
The final results look perfect in Cesium. Some thin rails might not show up and windows get a bit funky, but the geometry and the textures load in without a hitch. It's really cool technology that's so simple at its core that we built an alternative viewer in Rust+WASM in 3 weeks.
Could you use a technique like this to do facade inspection as mandated in NYC?
I just happened to watch "How To with John Wilson" S01E02, which talks about how scaffolding is put up everywhere in NYC because facades need to be inspected for loose bricks every 5 years.
They've got a whole bunch of features in their viewer; building a viewer that matches them feature for feature would take years. The basic technology is a tree of JSON files describing which models or other JSON files to load for regions of space, so the nodes are JSON files and the leaves are basically 3D models. There's a simple algorithm: given your current perspective (i.e. where you are in space and what direction you're looking in), it walks the tree and decides which models will be in your field of view, which won't be, and at what distance (which determines the level of detail). The last step is simply downloading those models while evicting irrelevant 3D models from your GPU.
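To give a sense of how small that core loop is, here's a rough sketch of the tree walk. The field names and the distance-based refinement rule are my own simplifications (the real tileset format has proper bounding volumes, transforms, and view-frustum culling), so treat it as an illustration of the idea rather than the actual spec:

```rust
// Simplified sketch of the tile-tree walk described above, not Cesium's actual code.
// Field names and the distance-based refinement rule are illustrative; a real viewer
// also culls against the view frustum and reuses cached tiles.

struct Tile {
    center: [f64; 3],             // bounding-sphere center of this region of space
    radius: f64,                  // bounding-sphere radius in meters
    geometric_error: f64,         // how coarse this tile's content is, in meters
    content_uri: Option<String>,  // the 3D model to download if this tile is selected
    children: Vec<Tile>,          // finer subdivisions of the same region
}

fn distance(a: [f64; 3], b: [f64; 3]) -> f64 {
    ((a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2) + (a[2] - b[2]).powi(2)).sqrt()
}

/// Walk the tree from the root and collect the content URIs to load for the current
/// camera position. A tile is refined into its children when the camera is close
/// enough that the tile's geometric error would be noticeable.
fn select_tiles<'a>(tile: &'a Tile, camera: [f64; 3], error_threshold: f64, out: &mut Vec<&'a str>) {
    let dist = (distance(camera, tile.center) - tile.radius).max(1.0);
    let refine = !tile.children.is_empty() && tile.geometric_error / dist > error_threshold;
    if refine {
        for child in &tile.children {
            select_tiles(child, camera, error_threshold, out);
        }
    } else if let Some(uri) = &tile.content_uri {
        out.push(uri.as_str()); // download this model; anything not selected can be evicted from the GPU
    }
}
```

Everything else in a production viewer (caching, request prioritization, eviction policy) is engineering layered on top of that one decision.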
They hand-rolled an entire high-performance 3D engine in JavaScript, which is a big challenge in and of itself. We cheated by using WASM and a high-performance programming language, which made the 3D engine part trivial. Likewise, I'd be surprised if it took them more than a week or two to get their initial proof of concept working in Unreal Engine.
Oh and I believe their business model is/was to sell hosting and processing for the 3D models. We've been paying them a couple hundred per month just for this service.
We're using Metashape and Reality Capture. We used to use Bentley, but they've got unethical business practices so we're avoiding them now.
Metashape has really good licensing, their software runs on Linux and is easy to integrate into processing pipelines, and they've got a big feature set.
Reality Capture was super expensive, though that has seemingly changed now that they've been bought by Epic, so we might reconsider our strategy there. They make better use of hardware, which makes them faster, and their resulting models look great. But their licensing was so expensive that we just couldn't afford switching all of our processing to RC.
If you can get a good deal with them it's probably fine, and we only talked to nice people there. But we're still on limited runway; we can't deal with random $20k invoices, even if a nice sales rep reverses the charge when called.
I imagine this will be used for stuff outside of games primarily. My understanding is that Unreal is now used a lot in specialized domains like film-making (globe fly-bys in the news and documentaries?) and sort of general "dashboard"-y stuff.
I'm sure someone could think of an "interesting" game built off of this data but I highly doubt you go through all this engineering effort on that idea alone.
> Unreal is now used a lot in specialized domains like film-making (globe fly-bys in the news and documentaries?)
It goes further than that now. For The Mandalorian, they famously set up a "20-foot high, 270-degree semicircular LED video wall"[0] to use as a backdrop for the shots, and that wall was rendering the desired environment using Unreal Engine. No need to chroma key an alien planet in post, when you can make it a part of the scene (and now the actors can see it too as they play!). More at [1].
From what I read about it previously I think they do actually re-render the backgrounds in post
But the LED video walls serve two purposes: ensure the lighting and reflections on real objects in the scene match up with the CGI, and give the actors a visualisation to work with.
> From what I read about it previously I think they do actually re-render the backgrounds in post
All the videos and reels I've seen on this talk about "capturing the effects in-camera", so while I wouldn't be surprised if there's some integration and cleanup work to be done (particularly where the wall meets the set), it does sound like what is on the LED wall is what we see in the final footage and it's not just placeholder or pre-viz.
I think the biggest wins they get from it are that the actors can see what they're supposed to be standing in front of, and that the environment casts realistic lighting on the actors and other props.
I think you're right. They use terms like "final pixel" when they talk about it, which leads me to believe they're capturing the background and not just rerendering in post.
> My understanding is that Unreal is now used a lot in specialized domains like film-making (globe fly-bys in the news and documentaries?)
It's also used to create TV studios [0]. In the UK, BBC soccer punditry programmes such as 'Match of the Day' are filmed in a green-screen studio containing just the pundits, desk, and chairs, with the virtual studio around them created using Unreal Engine. Different programmes can get a totally different look by using different Unreal environments.
It can also be used as a starting point to "rough in" your layout, which you then clean up and refine by hand.
If you were trying to build a virtual version of a real-world location, I would imagine the time savings in starting from real landmarks and buildings would be significant.
Also good for prototyping and seeding ideas for level design, I'd imagine. I saw someone prototype their run-and-gun game on Google Maps geometry; they'll have to model everything once they lock a location down, but they can test complex environments very quickly. Also, having a real location, however rough, as a base gives you a lot of ideas you'd have a hard time coming up with starting from a blank slate.
It would still lend itself to some interesting games, such as mech battles, or Warcraft with oversized vessels or hordes of lemming-sized people in these environments.
It has a very distinct low-res photogrammetry look, yes. But if embraced and under good art direction, I think it could be made to work as a stylized look. Maybe throw in some shader effects to spice it up a bit, make sure the hand-modeled objects fit the design, and it could be pretty cool.
I think it is a bedrock for bridging gaming and real life. From now on you could visit a virtual store in a real-but-in-game location and make all your purchases without leaving the game. Imagine GTA, but everything you buy is real.
Can anyone comment on how good these datasets are? Are these going to be down to like, an individual tree?
I also note it mentions integrating with Unreal's water engine. Unreal's water engine is excellent, but as yet not so great with tiled landscapes. Curious how that'll work.
Nice models of Denver in the video, especially Union Station, the City & County Building, and the airliner flyover. But the car is driving the wrong way on Broadway, even though the parked cars on the right side are pointed in the correct direction. Maybe a new GTA Denver edition is in the works? Heh.