
> need dataframes and pandas? cuDF. need compression/decompression? nvcomp. need vector search? cuVS. and the list goes on and on and on.

Sure, but that doesn’t mean I’m going to pay a billion extra for my next cluster – a cluster that just does matrix multiplication and exponentiation over and over again really fast.
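For a sense of what that core workload looks like: scaled dot-product attention, the hot loop of a transformer layer, really is just matrix multiplications plus an exponentiation (the softmax). A minimal NumPy sketch, purely illustrative and not any particular framework's implementation:

    import numpy as np

    def attention(Q, K, V):
        # matrix multiplication: similarity scores between queries and keys
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        # exponentiation: softmax over the key dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # another matrix multiplication: weighted sum of the values
        return weights @ V

    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((128, 64)) for _ in range(3))
    out = attention(Q, K, V)  # shape (128, 64)

Training and serving large models is mostly running kernels like this, at enormous batch sizes, as fast as the hardware allows.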

So I’d say CUDA is clearly a moat for the (relatively tiny) GPGPU space, but not for large scale production AI.



> Relatively tiny GPGPU space

There are a lot of other workloads that are highly GPU parallelizable but just aren't being talked about, because they're not part of the AI language model boom. To pick a few I've seen in passing from my (quite removed from AI) job:

- Ocean weather forecasting / modelling
- Satellite imagery and remote sensing processing / pre-processing
- Processing of spatial data from non-optical sensors (Lidar, sonar)
- Hydrodynamic and aerodynamic turbulent flow simulation
- Mechanical stress simulation

Loads of "embarrassingly parallel" stuff in the realms of industrial R&D is benefiting from the slow migration from traditional CPU-heavy compute clusters to ones with GPUs available; even before the recent push to "decarbonise" HPC, people were already seeing gains in "work done per watt" cost efficiency.
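As a toy illustration of how low the migration barrier can be for that kind of array-heavy work, CuPy is roughly a drop-in replacement for NumPy on NVIDIA GPUs. A hedged sketch (it assumes a CUDA-capable GPU with cupy installed; the smoothing kernel is made up for illustration, not taken from any of the domains above):

    import numpy as np
    import cupy as cp  # near drop-in NumPy replacement that runs on the GPU

    def relax(field, xp, iters=100):
        # one Jacobi-style smoothing pass per iteration; every cell is
        # updated independently of the others, i.e. embarrassingly parallel
        for _ in range(iters):
            field = 0.25 * (xp.roll(field, 1, 0) + xp.roll(field, -1, 0)
                            + xp.roll(field, 1, 1) + xp.roll(field, -1, 1))
        return field

    grid = np.random.rand(4096, 4096).astype(np.float32)
    cpu_result = relax(grid, np)              # runs on CPU cores
    gpu_result = relax(cp.asarray(grid), cp)  # same code, runs on the GPU
    assert np.allclose(cpu_result, cp.asnumpy(gpu_result), atol=1e-4)

The "work done per watt" argument falls out of exactly this pattern: the same array code, but each update spread across thousands of GPU threads instead of a few dozen CPU cores.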

Probably "relatively tiny" right now compared to the AI boom, but that stuff has been there for years and will continue to grow at a slow and steady pace, imo. Adoption of GPGPU for lots of things is probably being bolstered by the LLM bros now, to be honest.

CUDA benefits from being early to market in those areas: mature tools, mature docs, lots of extra bolt-ons, organizational inertia ("we already started this using CUDA"), etc.


Sure, all that’s interesting and highly realistic. But does it make Nvidia the most valuable company in the world?



