I shed an actual tear. I dreamed of days like this. I got close, building a small language for generating generative music, with decay envelopes, sawtooth waves and the like. This is a functional DAW.
I made Lambda Musika[0][1] a long time ago and its elevator pitch is literally "Lambda Musika, the functional DAW" (as in functional programming).
Check the teal button at the bottom for other examples!
I don't use it that much anymore (Strudel's language is truly expressive) but I still reach for it when I want to do sound design, since Strudel is more of a sequencer (an area where Lambda Musika falls short).
That's absolutely sick. I love seeing a full arrangement like this as opposed to destructive live coding. That's cool too, but I don't really vibe with it as a workflow. Definitely taking some inspiration from this.
I found that annoying in the editor, but if used on a 2nd screen to build graphics programmatically (fractals, etc.), or via an external port to drive RGB LED arrays or matrices, the results could be spectacular. Imagine fractals driven by music, or a giant spectrum analyzer made of LED strips.
I recently bodged together a board that drives FastLED programs parameterized by the control voltages coming off a eurorack. It was really neat and straightforward, because you have some really good clock sources to sync to.
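The core of that idea is just a mapping from a sampled control voltage to an LED parameter. A minimal sketch of that mapping (the 0-5V range and the 8-bit hue target are assumptions; FastLED's `CHSV` takes 0-255 hue values, but the actual ADC read and LED write are omitted here):

```python
def cv_to_hue(cv_volts, cv_max=5.0):
    """Map a eurorack control voltage (0..cv_max volts, assumed range)
    to an 8-bit hue suitable for something like FastLED's CHSV."""
    # Clamp out-of-range readings so a hot CV input can't overflow the hue.
    clamped = max(0.0, min(cv_volts, cv_max))
    return int(round(clamped / cv_max * 255))
```

On the actual board this would run per frame: sample the CV via an ADC, feed the result through a function like this, and push the color to the strip.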
Nice. They’ve got a very good reason to keep the model as closed as possible. The second you make it open, it just becomes the fitness signal for the next batch of deepfakes.
Make sure it works (most of the time), lock it down behind a well-guarded API, and charge a lot of money :)
Nice launch. The "pick up the same Claude Code session on your phone" bit resonates. I’ve wanted that too, but with self-hosting and a few creature comforts.
I hacked a tiny web client around the claude CLI that I use daily:
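A "tiny web client around the CLI" can be little more than an HTTP handler that shells out and streams stdout back. A rough sketch of the shape (the `-p` non-interactive flag is what I believe the claude CLI uses for a one-shot prompt, but check your version; everything else here is generic):

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_cli(prompt, cmd=("claude", "-p")):
    """Run a one-shot CLI invocation and return its stdout."""
    result = subprocess.run([*cmd, prompt], capture_output=True, text=True)
    return result.stdout

class PromptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw prompt from the request body.
        length = int(self.headers.get("Content-Length", 0))
        prompt = self.rfile.read(length).decode()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(run_cli(prompt).encode())

if __name__ == "__main__":
    # Bind to localhost only; put real auth in front before exposing this.
    HTTPServer(("127.0.0.1", 8080), PromptHandler).serve_forever()
```

Session persistence (picking the same conversation back up from a phone) is the harder part; this only covers the stateless request path.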
Nice, but when I try to generate photos, it still looks more like digital art than anything else, especially faces. Was this done intentionally by OpenAI?
Yeah, I liked one of its suggestions for a sailboat with a rocky forested coastline in the background, so I asked it to refine it to look photographic. But the result still looked more like an illustration.
> When training a 65B-parameter model, our code processes around 380 tokens/sec/GPU on 2048 A100 GPU with 80GB of RAM.[1]
Note that you probably need to budget for double to triple that because things go wrong and it usually takes multiple starts to get a good training run.
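For a sense of scale, the quoted figure pencils out roughly like this (the 1.4T-token dataset size is an assumption based on what LLaMA-65B reportedly trained on, not part of the quote):

```python
# Back-of-envelope from the quoted throughput: 380 tokens/sec/GPU on 2048 GPUs.
tokens_per_sec_per_gpu = 380
num_gpus = 2048
dataset_tokens = 1.4e12  # assumed corpus size, not stated in the quote

cluster_tput = tokens_per_sec_per_gpu * num_gpus   # aggregate tokens/sec
days = dataset_tokens / cluster_tput / 86400       # one clean pass, in days

print(cluster_tput, round(days, 1))  # roughly 778k tokens/sec, ~21 days
```

So even a single clean pass is about three weeks of cluster time, before counting the failed or restarted runs mentioned above.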
It pains me to see AMD just sitting on their asses through this incredible development of AI and possibly AGI. If they still can't get their shit together, they should spin off the discrete GPU division into something purely compute-focused. I believe there is now enough momentum in the AI/ML space to fully develop innovative ideas on the h/w front.
Never before has society celebrated its own demise with such fervor. Brace yourselves for widespread job losses, instant fabrication of fake news, deep-fake adult content, and the destabilization of numerous markets. But hey, at least we have a shiny gadget to make our soon-to-be obsolete jobs easier!
It's unrealistic to expect our economy to handle this onslaught, and it's naive to think that tools created by ultra-capitalistic, multi-billion dollar corporations aren't designed for profit and gatekeeping. They certainly aren't crafting them to sabotage their own success.
I'm not opposed to AI, but it's crucial to consider the implications. Look into OpenAI and other organizations shaping AI development, and contemplate the impact of their innovations.