Unlicensed is the same as fully copyrighted. There is a presumption of ownership. Licenses, in this case, serve to clarify allowable uses. Without a license, nothing is allowable, because the copyright holder retains every right the law grants them.
I believe the heat is being transferred to the ocean water entering the desalination process, which reduces the energy required for desalination, so no transfer of heat to the ocean itself.
Isn't that a bit obvious? We all know about the tens of billions paid every year to maintain its position. Obviously this wasn't done in case Apple would switch to Bing.
The business case for building a search engine from scratch, which would rob you of $20B in yearly profit, is weak.
If you’re spending more than a few months on a topic you might be doing it wrong. I find that following a slightly stressful fast-paced schedule is actually kind of important for effective learning.
I'm working through a similar course with friends. We take two hours a week, and occasionally do homework. We're about halfway through, having started early this year. I feel it's fine; there's no need to rush. The upside of taking it slow is that it stays with you longer, I think. If you rush it and find no immediate application afterwards, you risk forgetting everything.
Long-term retention requires spaced repetition, of course. But I think the stressful pace does help you avoid getting bogged down on a first pass of the material.
It's impossible to cover everything about a topic in a short amount of time, but when you try, you at least develop a map of the terrain.
Every organization differs, of course, making this difficult to usefully answer.
I have always preferred to see senior devs as capable of architecting and implementing solutions (better yet, delegating parts to juniors) such that they can launch a product themselves. A mid-level dev can launch individual features but would struggle with the complexity of choosing both an auth model and a persistence layer for it (one that mixes well with other persistence uses), then deciding what the front end should look like overall, whether to use websockets, and so on. A junior can fix bugs, or work on tickets/scaffolded tasks in a productive manner.
We had seen interesting developments around vector databases, but then people stopped hyping them, since you could just store the vectors in normal databases without any real difference. I wonder what will happen once models can freely access them, though.
I really don't understand how people figure out the vectors to actually store in the databases, regardless of the underlying storage model.
Isn't that itself the province of an LLM? Say I have a bunch of text. How do I store the text so it can be searched "by similarity"? Semantic search with Sphinx was hard, I remember. Facebook had Faiss. And here we are supposed to just save vectors on commodity hardware BEFORE using an LLM?
1. Take a bunch of text, run it through an LLM in embedding mode. The LLM turns the text into a vector. If the text is longer than the LLM context window, chunk it.
2. Store the vector in a vector DB.
3. Use the LLM to generate a vector of your question.
4. Query the vector DB for the most similar vectors (as many as fit in the context window).
5. Get the text from all those vectors. Concatenate the text with the question from step 3.
6. The result of step 5 is your prompt. The LLM can now answer your question with a collection of similar/relevant text already provided in the context window along with the question (see the sketch below).
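A minimal sketch of those steps, assuming sentence-transformers as the embedding model and a plain in-memory numpy array standing in for the vector DB (the model name, chunks, and top-k value are illustrative choices, not from the original comment):

```python
# Minimal retrieval-augmented prompting sketch.
# Assumes: sentence-transformers is installed; "all-MiniLM-L6-v2" is just an
# example embedding model; a numpy array stands in for the vector DB.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Steps 1-2: embed the text chunks and "store" them (here: a plain array).
chunks = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Berlin is the capital of Germany.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

# Step 3: embed the question with the same model.
question = "Which city is the Eiffel Tower in?"
q_vec = model.encode([question], normalize_embeddings=True)[0]

# Step 4: similarity search; cosine similarity is a dot product on normalized vectors.
scores = chunk_vecs @ q_vec
top_k = scores.argsort()[::-1][:2]

# Steps 5-6: concatenate retrieved text with the question to form the prompt.
context = "\n".join(chunks[i] for i in top_k)
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # feed this to the LLM of your choice
```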
You don't even need an LLM. You can use Word2Vec, or even yank the embeddings matrix from the bottom layer of an LLM. And models like CLIP and BLIP can do the same for images.
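For the "yank the embedding matrix" route, a rough sketch using Hugging Face transformers ("gpt2" is only an example checkpoint, and averaging token embeddings is a crude but serviceable way to get one vector per text):

```python
# Crude text vectors from an LLM's input embedding matrix alone (no forward pass).
# Assumes: transformers and torch are installed; "gpt2" is just an example model.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
emb_matrix = model.get_input_embeddings().weight  # shape: (vocab_size, hidden_dim)

def text_to_vec(text: str) -> torch.Tensor:
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    vecs = emb_matrix[ids]      # look up each token's embedding row
    return vecs.mean(dim=0)     # average into a single vector for the text

a = text_to_vec("The Eiffel Tower is in Paris.")
b = text_to_vec("Which city is the Eiffel Tower in?")
print(float(torch.nn.functional.cosine_similarity(a, b, dim=0)))
```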
I haven't got it to do anything that appears to be posting a form or similar. The content has to be indexable on Bing, by the looks of things. The second it can submit forms, the internet is doomed.