Does this generalize to higher dimensions? I’m realizing my mathematics education never really addressed alternative coordinate systems outside of 2/3 dimensions
The last method mentioned at that wolfram.com link should work for any dimension (i.e. choosing random Cartesian coordinates with a normal distribution, then normalizing the vector obtained thus to get a point on the sphere).
The method presented in the parent article is derived exactly from this method of Marsaglia, and it should also work for any dimension.
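For concreteness, here's a minimal sketch of that method in Julia (the function name is mine, not from the article; the idea is just "draw each coordinate from a standard normal, then normalize"):

```julia
using LinearAlgebra  # for norm

# Uniform random point on the unit (n-1)-sphere embedded in R^n:
# the standard normal distribution is rotationally symmetric, so the
# direction of a Gaussian vector is uniform over the sphere.
function random_point_on_sphere(n::Integer)
    v = randn(n)
    return v / norm(v)
end

random_point_on_sphere(4)  # e.g. a point on the unit 3-sphere in R^4
```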
There should conceptually be something similar for higher dimensions, but I'm not sure that it only involves reasonable functions, where by reasonable functions I mean functions you are likely to find in the math library of your programming language.
Here's an outline of where θ = 2πu, φ = acos(2v-1) (where u and v are uniform random values from 0 to 1) comes from.
If you just picked a uniform random latitude and longitude for each point the distribution would not be uniform, because lines of latitude vary in length. They are smaller the farther they are away from the equator. You need to pick the longer lines of latitude more often than you pick the shorter lines of latitude. The probability density function (PDF) for picking latitude needs to be proportional to latitude length.
If you have a non-uniform distribution and you need to pick an x from it but only have a uniform random number generator, there is a trick. Figure out the cumulative distribution function (CDF) for your PDF. Then to pick a random value, use your uniform random number generator to pick a value y from 0 to 1, and find the x such that CDF(x) = y. That's your x. I.e., pick x = CDF^(-1)(y).
Latitude length is proportional to sin(φ) or cos(φ) of latitude depending on whether you are using 0 for the equator or 0 for one of the poles (I think the Mathworld article is using 0 for one of the poles), which makes the PDF proportional to sin(φ) or cos(φ). The CDF is the integral of the PDF, so it ends up proportional to cos(φ) or sin(φ). Using the inverse of that on your uniform random numbers then gives the right distribution for your latitude lengths. Thus we have the acos() in the formula for φ. The argument is 2v-1 rather than v to cover both hemispheres.
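Putting the two formulas together for the ordinary sphere in R^3, here's a sketch (with φ measured from one of the poles; the conversion to Cartesian coordinates at the end is just for illustration):

```julia
# theta = 2*pi*u, phi = acos(2v - 1), with u, v uniform on [0, 1].
# phi is CDF^(-1) of a PDF proportional to sin(phi) on [0, pi], which is
# exactly the "longer lines of latitude get picked more often" weighting.
function random_point_on_s2()
    u, v = rand(), rand()
    theta = 2pi * u        # longitude: uniform, since every meridian has the same length
    phi   = acos(2v - 1)   # polar angle: inverse CDF applied to a uniform value
    return (sin(phi) * cos(theta), sin(phi) * sin(theta), cos(phi))
end
```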
You could do the same things in higher dimensions. For a 4D sphere, you could slice it into parallel "lines" of "hyper-latitude". Those would be 2D instead of the 1D lines of latitude on a 3D sphere. Then to pick a random point you would choose a line of hyper-latitude at random, and then randomly place a point on that line of hyper-latitude.
Like with the 3D sphere, you would need to weight the lines of hyper-latitude differently depending on where they are on the 4D sphere's surface. You'd do something similar with the PDF and CDF, using CDF^(-1) to pick one of the lines of hyper-latitude with your uniform random number generator.
You then need to place a point uniformly on that slice of hyper-latitude. That's a 2D thing in this case rather than the 1D we had to deal with before, so we will need two more random numbers to place our point. I have no idea what the hell it looks like, but I'd guess it is not going to be "square", so we can't simply use two uniform random numbers directly. I suspect that 2D thing would also have to be sliced up into parallel lines of "sub-latitude", and we'd have to do the whole PDF/CDF/inverse-CDF thing there too.
I think in general, for an N-dimensional sphere, placing your random point would end up involving picking 1 coordinate directly with your uniform random number generator, while the other N-2 would involve inverse CDFs to get the right distributions from the uniform random number generator.
I have no idea whatsoever if those CDFs would all be simple trig functions like we have in the 3D case, or would be more complicated and possibly not have a reasonably efficient way to compute their inverses.
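For what it's worth, here's a sketch of what the first step looks like one dimension up: on the 3-sphere in R^4 the "hyper-latitude" angle φ has a PDF proportional to sin²(φ), so its CDF is (φ - sin φ cos φ)/π, which has no elementary inverse. One workaround (my choice, not anything from the article) is to invert the monotone CDF numerically:

```julia
# Hyper-latitude angle phi in [0, pi] for a uniform point on the 3-sphere in R^4.
# PDF is proportional to sin(phi)^2; CDF(phi) = (phi - sin(phi)*cos(phi)) / pi.
# There is no closed-form inverse, so invert by bisection.
function random_hyperlatitude()
    y = rand()
    cdf(phi) = (phi - sin(phi) * cos(phi)) / pi
    lo, hi = 0.0, float(pi)
    for _ in 1:60                 # each step halves the bracket
        mid = (lo + hi) / 2
        cdf(mid) < y ? (lo = mid) : (hi = mid)
    end
    return (lo + hi) / 2
end
```

So at least for the 4D case the CDF is still made of ordinary trig functions, but its inverse already isn't elementary.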
If you select "All data" for their user count, you'll notice a sharp shift in the gradient of the user count about a year ago. Any idea what would cause this?
For me, it's that combined with the prominent placement of the output of answer confabulators alongside search results. Given how terrible the output was initially, and how it is still not-infrequently awful, it reminds me of when Google was in "We've desperately gotta pump up the user numbers for Google Plus or else we'll lose the Race For Social!" mode and adding it to every big thing they controlled.
I'm still mad that they took away the '+' operator for that turd of a project. [0]
[0] To be clear, it totally could have been a great project. Early on, there were signs that it was going to be -at worst- decent. But, well, Vivek Gundotra wanted the project to be a big turd, so it ended up being a big turd.
I think they have started blending some AI results into the main search feed, and not just for ads (that would be understandable). My personal example: I was trying to find consultants who could help with passing the Apple Store review.
These results are promising and hopefully carry over to the upcoming Strix Halo, which I'm eagerly awaiting. With a rumoured 40 compute cores and performance on par with a low-power (<95W) mobile RTX 4070, it would make an exciting small form factor gaming box.
I've been super excited for Strix Halo, but I'm also nervous. Strix Halo is a multi-chip design, and I'm pretty nervous about whether AMD can pull it off in a mobile form factor, while still being a good mobile chip.
Strix Point can be brought down to 15W and still do awesome. And go up to 55W+ and be fine. Nice idles. But it's monolithic, and I'm not sure if AMD & TSMC are really making that power penalty of multichip go down enough.
Very valid concerns! AMD's current die-to-die interconnects have some pretty abysmal energy/bit. Really hope they can pull off something similar to Intel's EMIB maybe?
The rumors saying Strix Halo will be a multi-chip product are saying it's re-using the existing Zen5 CPU chiplets from the desktop and server parts and just replacing the IO die with one that has a vastly larger iGPU and double the DRAM controllers. So they might be able to save a bit of power with more advanced packaging that puts the chiplets right next to each other, but it'll still be the same IF links that are power-hungry on the desktop parts.
Me too. There's at least one manufacturer who makes a pretty sweet mini-ITX motherboard with the R9 7945HX; I hope they will follow up with Strix Halo once it's released.
I considered Nextflow before begrudgingly settling on snakemake for my current project. Didn't record why... possibly because snakemake was already a known quantity and I was under time pressure or because I felt the task DAG would be difficult to specify in WDL. It's certainly the most mature of the bunch.
Nobody wants to write or debug Groovy, especially scientists who are used to Python. It also causes havoc on a busy SLURM scheduler with its lack of array jobs (I've heard this is being fixed soon).
If your project depends heavily on general purpose GPU programming, you might start one in C++.
This was the case for a project I am working on that was started in the last year. The interop features in Rust (and other languages) are simply not as reliable as writing kernels directly in CUDA or HIP or even DPC++. You _can_ attempt to write the GPU code in C++ and call into it from $LANG via FFI, but if you want data structures and methods that work on both the host and the device, it's still easier to write it once in C++.
I concur re not being able to share data structures. I've been using `cudarc` for FFI (shaders in C++; CPU code in Rust), but that would be a nice perk.
I’m really unsure why this is front page. For a hacker news audience that has little knowledge of Aotearoa New Zealand history, this is an odd first introduction that has historically been used to vilify Maori and in turn justify colonisation. If this is your first exposure to the history of Maori, please know this emphasis carries its own agenda.
I think if someone had posted a link to an uncontroversial part of NZ history, it would languish on the new page with one or two points. Things that reinforce discourses of racial tension seem to constantly get upvoted... somehow.
You should read a little more closely before such strong condemnations.
The Julia macros @btime and the more verbose @benchmark are specially designed to benchmark code. They perform warm up iterations, then run hundreds of samples (ensuring there is no inlining) and output mean, median and std deviation.
This is all in evidence if you scroll down a bit, though I’m not sure what has been used to benchmark the Mojo code.
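For reference, a minimal sketch of how those macros are typically used (the workload function here is just a placeholder, not the code being benchmarked in the article):

```julia
using BenchmarkTools

# Placeholder workload standing in for whatever function is being benchmarked.
work(n) = sum(sqrt(i) for i in 1:n)

@btime work(1_000_000)       # quick summary: minimum time and allocations
@benchmark work(1_000_000)   # full statistics: min, median, mean, std, and a histogram
```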