Quick summary: apply for up to $6K in reimbursement per month, for up to 6 months, for Llama usage (wherever you use it). Incorporated startups with less than $10 million USD in funding are eligible. (Disclosure: I work at Meta as a Llama Partner Engineer.)
Do I count as the founder? I use contractors from Eastern Europe for frontend work; does that count?
Overall this seems like a very niche offering, considering $6K is peanuts these days. You can get more by applying to any of the Microsoft, Google, or Amazon startup programs. MSFT, for instance, straight up gives >$100K in cloud credits once you're funded (and if you're not, how are you paying your developers?).
> Join us for the opportunity to receive cloud reimbursements of up to $6,000 USD per month for up to six months, technical resources, and a vibrant community
Thanks :) AR/VR is not my thing; hopefully they'll do something there in the future. Tertiary education is in crisis thanks to AI, but AI is also going to supercharge it, and that's where research is needed.
If you come up with a great idea, Facebook can either steal it from you or buy you out, without having to spend their own funds testing all of the potential options.
Facebook hasn't had a good idea for over a decade (and even that one was trash), so they need a little help now and again.
Meta doesn't currently have a business selling straight-up API access. If they can get startups to rely on Llama hosting, there's no reason this can't become a whole new market for them.
Yes. You will just need to document how much of your OpenRouter bill was for Llama usage (e.g., provide screenshots if it's not broken down on your invoices or receipts).
Yes. That's what the "does it have to be legally incorporated" question is implicitly contrasting with, by my reading (as opposed to some putative "non-legally incorporated" company).
This is really cool, actually. I like how it uses the personas to query the LLMs from different angles. And 250 "queries" for free? That's amazing value and impressive parallelism. Nice job, guys!
Nope, you have control over what gets sent. Only the text after the command "please" (plus any previous commands and variables you've explicitly approved) is sent; you'd have to explicitly type "using my secret x". Your zsh history is not sent. Also, the only server involved is OpenAI's (there's no Magic Shell server to fear, yet ;) ).
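To make that concrete, here's a rough Python sketch of the model as I understand it (my own illustration, not the actual implementation; the function name, model choice, and payload shape are all mine). The point is that only the prompt and explicitly approved context ever leave the machine:

    # Hypothetical sketch (not the real code): a "please" command that
    # forwards only its prompt text, plus approved context, to OpenAI.
    import os
    import requests

    def please(prompt, approved_context=None):
        messages = []
        if approved_context:
            # Context the user explicitly approved; shell history is never read.
            messages.append({"role": "system", "content": approved_context})
        messages.append({"role": "user", "content": prompt})
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": "Bearer " + os.environ["OPENAI_API_KEY"]},
            json={"model": "gpt-4o-mini", "messages": messages},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]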
I downloaded the Mac app, configured a virtual device to send the system output to the "Krisp Speaker", and verified that it cuts most of the music out of what I'm listening to, leaving only the voice (at a somewhat degraded quality). I wish I could configure it to _cancel_ ambient noise, not just remove it from the input signal.
It won't be quick enough for phase cancellation, but presumably you could diff the output with the input, phase-invert it, and get the signal you want that way.
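In code, the idea is tiny (a toy numpy sketch, ignoring the latency problem entirely; the names are mine):

    # Toy sketch: the difference between Krisp's input and output is the
    # noise it removed; phase-inverting that yields a cancellation signal.
    import numpy as np

    def anti_noise(mic_input, krisp_output):
        noise = mic_input - krisp_output  # (voice + noise) - voice = noise
        return -noise                     # inverted: play back to cancel

Of course, by the time Krisp has produced its output, the ambient noise has already reached your ears, which is why the latency kills true cancellation.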
Yeah, name and other attribute lookup is feasible (and seems inevitable). Also, this prototype only claims to take a snapshot and submit it; it would need to perform the check in realtime against video input, something like the loop sketched below.
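A rough sketch of what the realtime version could look like (assumptions: OpenCV and the default webcam; "check_face" is a stand-in for whatever the prototype does with its snapshot):

    # Hypothetical realtime version: detect faces per frame and submit each
    # crop, instead of submitting a single snapshot.
    import cv2

    def check_face(crop):
        pass  # placeholder for the prototype's snapshot-and-submit step

    cap = cv2.VideoCapture(0)  # default webcam
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            check_face(frame[y:y + h, x:x + w])
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()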
It is not my call; I defer entirely to The Machine. We cannot understand its reasoning, only attempt to understand how it became what it is. Serious question on this topic, though: does anybody know if this facial recognition library has been completely open-sourced? Apologies if the answer is glaringly obvious or has already been answered in this thread.
I was one of them. We had many spirited discussions. TA was completely transparent about what was going on, and we tried hard to find an angle to make it work. I look forward to future hacking with the others.
I don't know for sure (I started out as a C++ developer, but I'm a web dev now), but I'd suspect there is a lot of work out there. A better way to test might be to go to a local developer meetup specific to your skills and ask people where they work. I'd be surprised if there wasn't plenty of work to be had.