Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful; I just don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or signing up for anything Meta-owned.
That's pretty much what they said about photographs at first. I don't think you'll find a lot of people who argue that there's no art in photography now.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
I would argue that with photography, only the tooling changed while the craft remained, whereas with AI the craft completely transforms into something else, which I'm just not interested in.
We're working on something like that using Unity[1]. It's nice, because unlike the commercial platforms, we don't need to support low-end devices like the Switch. And yes, the DXR features will make a difference[2] (sorry, I don't have a more recent video).
Concerning physics, we're using VPX's engine, which is very well tuned to pinball. Not sure if breaking the glass is going to be a thing, but PRs welcome if you think so. ;)
If you're taking requests for future pinball machines, I'd love to see the 1975 Gottlieb Soccer machine. My parents had one in our basement for most of my childhood. It was broken half the time, and it was hard to find a pinball mechanic to fix it, but lots of great memories.
A long shot, I'm sure; I just never really see this one get any love anymore. Volley seems pretty similar, though, and it looks like they're only a year apart.
BTW, folks like freezy that work on the engine and the like are often not the folks who develop the tables. A lot of work goes into each side of it, and they're doing it in their free time.
Yes, Volley was a nice project because one of the dev team has one at home, so we got ultra detailed assets. It was also a great use case to test Visual Scripting.
But before we dive into more table builds, we'll be focusing on the engine work.
For sure! Looking forward to you getting back to VPE now that the whole DMD thing is sorted thanks to your efforts. Do you have a rough idea of what things you'll be focusing on next? I know there was mention of a dedicated player a while back...
There's actually a pretty active part of the virtual pinball community that recreates old EM machines (Electro-Mechanical). Check out www.VPForums.com and maybe post a request there.
When authoring a virtual version of these older tables one of the challenges is getting a decent scan of the bare playfield. Fortunately, there are a lot of EM collectors who acquire and restore physical machines and they tend to be quite supportive of virtual preservation efforts - so they can be a great source for playfield scans (since they take it all apart when restoring anyway). Finding a hi-res scan or a potentially willing source for a scan will put you a good ways toward getting your virtual table made.
Also keep in mind that a virtual version is usually close, but sometimes the tables have particular quirks. For example, I had a Stars machine. I could back the ball up into the side chute with a flipper flick for some easy points. But that doesn't work virtually, as it relies on the elasticity of the rubber and the weight of the ball distorting it a bit on the flipper.
Saw one guy who made a VR version of these and built a physical stand that was just the plunger and buttons for him to hold onto. In VR it looked like he was playing a full cabinet. Tactilely it felt that way too. Almost dumped $3k into that mess when I saw it, before I came to my senses.
There are also large collections of them out there you can get. Most of the popular tables are recreated in some way. But in varying conditions of 'done'.
For a physical build, just curious if you've thought about integrating something like OpenTrack to track the head/eye position of the player and modify the camera position, to make the table look even more real when it's in a physical cabinet?
Yeah, I used to work on the Kinect which is why my mind went there - but the v1 certainly wouldn’t be up for this.
Fun fact, originally, the resolution of the depth sensor was 640x480 but was nerfed in firmware to 320x240. Why?
The makers of Rock Band wanted to make a Kinect version. But, with a Rock Band mic, bass, two guitars, and keyboard plus Kinect, the Xbox 360’s USB bus couldn’t handle it. So the Kinect got nerfed.
The company behind Rock Band either shut down or went bankrupt before the Kinect went on sale. At that point, way too much of the tooling (not to mention pose estimation modeling) around the Kinect had been built with the 320x240 resolution constraint so it wasn’t feasible to “unlock” the full res.
It's so sad that Microsoft fucked over the Kinect at every conceivable opportunity. I used it a lot for 'creative coding' openFrameworks and Processing projects (many in that community did). Thank god they bought the original from PrimeSense and didn't have the ability, or didn't exercise the ability, to break their multi-platform SDK.
When the Kinect 2 SDK was Windows-only, it was a huge turd in the punchbowl and a clear sign that Microsoft was not serious about making it a real tool to do real work with. I did do one project with the Kinect 2 and learned just enough of the SDK to write a shim in C# to run the camera and pipe the data out over the network to a box that was actually doing the rendering.
The Kinect 2 was also excessively picky about its USB 3 port; I remember going through about half a dozen USB 3 cards until I found one that worked.
But at the end of the day, this is a hobby, and Unity is an awesome engine to work with. I'm following Godot more closely than ever, but porting this over would take too much effort for it to still be fun.
If there are any pinhead game devs out there, we're still working on the next generation of Visual Pinball, called VPE[1].
However, given the current drama around Unity, we're looking into Godot to evaluate how much effort a port would take, and how well Godot would support our needs.
I have a VPX cab, but I've been out of the loop for a while because it just works. A refresh of my cab is on the agenda, though, and VPE looks great!
Are you targeting Linux compatibility, by chance? My biggest complaint about the whole VPX scene is that it's locked to Windows.
And it runs pretty great! I just found out about it the other day and was impressed how well it worked.
By the way, freezy, thanks for all the work you put into VPX. I hope to see VPE get some momentum behind it again now that your DMD stuff is finished off. What do you think the likelihood is of moving to Godot, and what would you estimate the impact on VPE to be?
TBH I just started looking into it, together with a few other developers from the community. I'll post something at the usual place as soon as we know more.
It depends on the effort you want to put in, and the tables you want to play.
VPX has the largest table collection, and the good ones play very well. But there is also Pinball FX which is a lot easier to set up, and they are catching up in terms of cabinet support (we're helping them, the next thing is force feedback support).
There is also Future Pinball which runs a few original games with mods pretty well now, but it's still closed source.
You’ll probably end up primarily running Visual Pinball. Some of the emulations are nearly perfect if you disregard nudging; I’ve never been able to get convincing nudge physics.
Pinball FX is pretty fun too, but it definitely feels more like a video game than VPX.
Yes. I don't even bother with anything else on my cab anymore. Good VPX tables are the best looking and best playing and there are a ton of them now with more being made by the community all the time.
Reading the presentation https://matthias-research.github.io/pages/tenMinutePhysics/0... , am I right in thinking that the vanilla Position-Based Dynamics is literally just how Quake does collision and then clip velocity against geometry? And the extension is simply to add a softness parameter that interacts in elastic-energy space?
So basically, going from reality to PBD, there are three layers of simplification:
First level: assume that the discrepancy between the inertial and constrained position is caused by a constant force. Guess a force vector, simulate a constant-force scenario in a vacuum, and fine-tune your guess until the final position makes sense, to get the accompanying velocity.
Second level: actually, we don't really care about the velocity progression within the timestep, so instead we assume that the discrepancy is caused by a velocity boost at the end of the last frame. Guess an impulse vector, simulate a constant-velocity scenario in a vacuum, and fine-tune your guess until the final position makes sense, to get the accompanying velocity.
Third level: actually, we don't even care about the position progression within the timestep, so instead we just snap the position to the closest point that makes sense, say that your object went in a bee-line from the last position to this one, and take the displacement over time as your velocity.
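That third level fits in a few lines of code. Here's a minimal sketch (my own illustration, not from any particular engine), assuming a single particle falling onto a floor at y = 0:

```python
# Minimal sketch of the "third level" (vanilla PBD): integrate to an
# unconstrained candidate position, snap it to the nearest valid position,
# and derive the velocity from the resulting displacement.

def pbd_step(pos, vel, dt, gravity=-9.81):
    vel = vel + gravity * dt
    candidate = pos + vel * dt        # unconstrained "inertial" position
    projected = max(candidate, 0.0)   # snap to the floor constraint y >= 0
    vel = (projected - pos) / dt      # velocity is just displacement / time
    return projected, vel

pos, vel = 1.0, 0.0                   # drop a particle from y = 1
for _ in range(100):
    pos, vel = pbd_step(pos, vel, 1 / 60)
```

Note how the projection plus the displacement-based velocity update makes the particle settle on the floor rather than bounce: the velocity component pointing into the constraint is simply discarded, which is the energy-losing, stability-friendly bias PBD is known for.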
I'm not familiar with the implementation of Quake's physics, but my recollection of the PBD paper was that it was basically a "mathing up" of what was already a fairly common way of handling physics in games. Thomas Jakobsen wrote a very influential paper in 2001 about the character physics in the original Hitman that popularized a lot of the same ideas later presented in the PBD paper.
What is really interesting to me is that later on in 2016 Miles Macklin et al. from Nvidia released the Extended Position Based Dynamics paper (the XPBD referenced in the article), which bridged the gap between hacky-gamey PBD and a principled fully physics-based derivation. The physical derivation was explored and refined further in Primal/Dual Descent Methods for Dynamics.
And finally, most interesting was the Small Steps in Physics Simulation paper by the same Nvidia group, which showed that a simplified variation of XPBD that got rid of iterative solving in order to increase the physics sim framerate is actually a state-of-the-art dynamics solver. As in, many dynamics problems are currently solved most accurately/efficiently using this overgrown hacky algorithm game programmers came up with to make dragging around corpses look better.
Kind of parallels the whole graphics cards for gamers morphing into GPUs for AI transition, just in a more niche way.
The two key insights that drive the game-physics approach of PBD (which follows decades of spaghetti-at-the-wall experimentation) essentially come down to: choosing a source of error that can be controlled, and not throwing away information too readily.
You end up using position because you can then solve for "only give me an answer with a valid position" - addressing it through motion makes it an indirect process, and errors then become subject to positive feedback loops. This biases PBD towards losing energy, but that's desirable for stability, and XPBD reduces the margin of error.
You avoid throwing away information by being cautious about when you "go forward" with a solution to the next timestep, and possibly keeping multiple solution sets alive to pick from heuristically. This is something you can do extensively when you are aiming for simple physics with abstract dynamics (platforming games, fighting games, etc.) - you know what kinds of solutions will "look right" already, therefore test all of them, make a ranking, backtrack as needed. When realism is needed, the principle still works - you can still rank solutions by making up a metric - it's just made more complicated by the number of answers you get with complex dynamics. That explains why XPBD moves away from "substepping" the physics: it's more important to "go wide" and scan for a single, high-quality solution than to try to linearize each aspect and hope that using smaller steps will reduce the error for you, which was a common approach for abstract dynamics and resulted in biases like "x-axis movement is favored over y". The secret sauce in XPBD's design is in getting the desired qualities in a more analytic fashion, without so much brute-force computation.
> That explains why XPBD moves away from "substepping" the physics
Interestingly, XPBD has moved back to substepping! The relatively recent "Small Steps in Physics Simulation" from Nvidia goes into it, but I can outline the reasoning briefly.
In a physics simulation, there are 2 main sources of error, the integrator and the solver. Breaking that down a bit:
The integrator is an algorithm for numerically integrating the equations of motion. Some possibly familiar integrators are Euler, Verlet, and Runge-Kutta. Euler is a simple integrator with relatively high error (the error scales linearly with timestep size). The most common version of Runge-Kutta is more complex, but its error scales with the 4th power of the timestep.
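That linear error scaling is easy to check numerically. A quick sketch (my own illustration): integrate dy/dt = -y from y(0) = 1 to t = 1 with explicit Euler and compare against the exact answer e^-1 at two step sizes:

```python
import math

def euler_error(steps):
    # Explicit Euler on dy/dt = -y, integrated from t = 0 to t = 1.
    dt = 1.0 / steps
    y = 1.0
    for _ in range(steps):
        y += -y * dt
    return abs(y - math.exp(-1.0))

# First-order method: halving the step size roughly halves the error.
ratio = euler_error(100) / euler_error(200)
```

The ratio comes out very close to 2, confirming first-order convergence; a 4th-order Runge-Kutta method would give a ratio near 16 under the same experiment.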
The solver comes into play because the most stable flavors of integrator (so-called implicit or backwards integrators) spit out a nonlinear system of equations you need to solve each physics frame. Solving a nonlinear system to high accuracy is a difficult iterative process with its own zoo of algorithms.
XPBD uses an implicit Euler-esque integrator and a simple, but relatively inefficient, Projected Gauss-Seidel solver. For most games, the linear error from the integrator is ugly but acceptable when running at 60 or even 30 frames a second. Unfortunately, for the solver, you have to spend quite a bit of time iterating to get that error low enough. The big insight from the "Small Steps" paper is that the difficulty of the nonlinear equations spat out by the integrator scales with the square of timestep (more or less -- nonlinear analysis is complicated). So if you double your physics framerate, you only have to spend a quarter of the time per frame in the solver! It turns out generally the best thing to do is actually run a single measly iteration of the solver each physics frame, and just fill your performance budget by increasing your physics frames-per-second. This ends up reducing both integrator and solver errors at no extra cost.
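To see the substepping effect concretely, here's a toy experiment (my own sketch with made-up names, not the paper's code): a PBD pendulum - one particle constrained to a circle under gravity - run for one second, either with one constraint projection per 60 Hz frame or with eight substeps per frame. Exact dynamics would conserve mechanical energy, so the energy drift measures the solver's damping error:

```python
import math

def pendulum_energy_drift(substeps, frames=60, frame_dt=1 / 60, L=1.0, g=9.81):
    # Unit-mass bob starts at rest, horizontal, anchored at the origin.
    px, py = L, 0.0
    vx, vy = 0.0, 0.0
    dt = frame_dt / substeps
    for _ in range(frames * substeps):
        vy -= g * dt                          # integrate gravity
        cx, cy = px + vx * dt, py + vy * dt   # unconstrained candidate
        d = math.hypot(cx, cy)
        nx, ny = cx / d * L, cy / d * L       # single projection onto |p| = L
        vx, vy = (nx - px) / dt, (ny - py) / dt
        px, py = nx, ny
    # Energy relative to the start (exact dynamics would keep this at 0).
    return 0.5 * (vx * vx + vy * vy) + g * py

drift_big = abs(pendulum_energy_drift(substeps=1))
drift_small = abs(pendulum_energy_drift(substeps=8))
```

With a single constraint the projection is exact either way, so extra solver iterations would buy nothing here; yet the substepped run drifts far less, which is the "Small Steps" result in miniature.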
Hey freezy! I'm amazed at what you've been able to build with VPE and appreciate the consistent push to bring Visual Pinball into a modern era. Seriously great job with untangling that legacy codebase and keeping the good bits (physics, existing tables), and replacing everything else.
I also appreciate how you interact with the community. Every time I see a new post from someone announcing a new project to replace VP, you're there to gently point out all of the details they haven't encountered yet and why VPE is solving them a certain way already. Nice!
I do think there's a big need for vpx-js to make visual pinball more shareable in a browser, even if Unity offers a webgl export. Hopefully I can help with that soon!
I still want to bring vpx-js to a level where people can actually play a WPC table in the browser. That'd be really awesome and hasn't been done yet, at least not that I know of.
Shameless plug, we're working on a next-gen pinball simulator called Visual Pinball Engine[1]. It's free and open source, using Unity, and will eventually support importing FP's table format.
It's still WIP, but we've put a lot of focus on tooling, so anyone can easily create tables.
This is awesome! Do you know if any of these simulators emulate wear and tear on the table? I feel like the lack of variability in the table is why they've all felt like playing in the uncanny valley to me.
Thanks! :) What we're doing is adding adjustable randomness to the collision resolvers, which results in more or less control over where the ball goes. You can also adjust many parameters of the flipper bats, e.g. you could make the coil less strong, which would be the case for many worn machines. Finally, as you can see in the Volley video, texturing is a big part of adding visual wear, and it makes the whole thing a lot less uncanny.
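As an illustration of what adjustable randomness in a collision resolver could look like (a hypothetical sketch, not VPE's actual code - the function and parameter names are made up): perturb the reflected ball velocity by a small random angle scaled by a wear parameter.

```python
import math, random

def resolve_bounce(vx, vy, nx, ny, randomness=0.0, rng=random):
    # Reflect the velocity about the contact normal (nx, ny).
    dot = vx * nx + vy * ny
    rx, ry = vx - 2 * dot * nx, vy - 2 * dot * ny
    # Rotate the result by a small random angle to model surface wear;
    # randomness = 0 gives a perfectly deterministic bounce.
    a = rng.uniform(-randomness, randomness) * math.pi
    ca, sa = math.cos(a), math.sin(a)
    return rx * ca - ry * sa, rx * sa + ry * ca

wx, wy = resolve_bounce(1.0, -1.0, 0.0, 1.0)  # no wear: exact reflection
```

Dialing `randomness` up makes ball trajectories less repeatable, mimicking a worn playfield; dialing it down gives the player more control, like a freshly waxed machine.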
Yes, definitely. I did some tests with my headset, and while a few usability and performance issues need to be resolved, it looks incredible, especially the reflections.
I think what you're missing is that this discussion is not about the legal consequences of these individuals, but about ethical decisions that will have a negative impact on the ecosystem as a whole.
TBH I don't see an ecosystem here. There are some dots that are connected, but it seems like people think there's a liable vendor polishing npm packages.
Also, I'm not sure which is more unethical: malware from a random developer, or profiting off their "free code"*
* while not caring at all about open source or its sustainability.