In this case the majority of the work was done by another company on your instruction. When you signed up, was there anything in the terms that said you get ownership of the output?
All of the notable generative AI companies have policies stating that they won't claim copyright over your outputs.
They also frequently offer "liability shields" where their legal teams will go to bat for you if you get sued for copyright infringement based on your use of their services.
Another way to look at it: everything an LLM creates is a 'hallucination'; some of these 'hallucinations' are just more useful than others.
I do agree with the parent post. Calling them hallucinations is not an accurate way of describing what is happening, and using such terms to personify these machines is a mistake.
This isn't to say the outputs aren't useful; we can see that they're very useful...when used well.
Most applications backed by kdb+ do just this. It comes with its own parser, and you can query tables using something like an AST.
For example, the user might ask for data with the constraint
where TradingDesk=`Eq, AvgPx>500.0
which kdb+ parses into
((=;`TradingDesk;(),`Eq);(>;`AvgPx;500.0))
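(You can reproduce this with q's built-in parse function, e.g. parse "select from trades where TradingDesk=`Eq, AvgPx>500.0", where trades is a stand-in table name; the where clause appears in the result as a list of these constraint triples, though the display of enlisted values can differ slightly.)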
As a dev on the system I can then have a function which takes in this constraint and a list of clients that I want to restrict the result to. That list of clients could come from another function related to the entitlements of the user who made the request:
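A minimal sketch of what that function could look like, with hypothetical names (restrict, Client):

/ the parsed constraint list from above
parsed:((=;`TradingDesk;(),`Eq);(>;`AvgPx;500.0))
/ append an entitlement constraint, i.e. "Client in clients" in parse-tree form
restrict:{[c;clients] c,enlist(in;`Client;enlist clients)}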
Then that gets passed to the function which executes the query on the table (kdb+ supports querying tables in a functional manner as well as with a structured query language) and the result has the restrictions applied.
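Continuing the sketch: the functional form of select is ?[table;constraints;by;columns], so the execution step might look like

/ 0b = no grouping, () = return all columns
executeQuery:{[t;c] ?[t;c;0b;()]}
executeQuery[trades; restrict[parsed; `clientA`clientB]]

with trades and the client symbols as stand-ins.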
It's really nice because, once parsed, it's list processing like in a Lisp, not string processing, which is a pain.
It's not just because of the right-to-left evaluation. If the difference were that simple, most humans, let alone LLMs, wouldn't struggle with picking up q when they come from the common languages.
Usually when someone solves problems with q, they don't approach them the way one would in Python/Java/C/C++/C#/etc.
This is probably a poor example, but if I asked someone to write a function to create an nxn identity matrix for a given n, the non-q solution would probably involve some kind of nested loop that checks whether i==j and assigns 1, otherwise assigns 0.
In q you'd still check equivalence, but instead of looping, you generate the list of numbers 0 through n-1 and compare the whole list against each of its items:
{x=/:x:til x}3
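For n=3 that evaluates to the identity matrix as a list of boolean vectors:

100b
010b
001b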
An LLM that's been so heavily trained on an imperative style will likely struggle to solve similar (and often more complex) problems in a standard q manner.
A human can deal with right-to-left evaluation by moving the cursor around to write in that direction. An LLM can’t do that on its own. A human given an editor that can only append would struggle too.
Might help. You could also allow it to output edits instead of just a linear sequence. You'd probably have to train it on edits to make that work well, and that training data might be tricky to obtain.
Absolutely. The graph and timeline views are also invaluable on decades-old projects when you're trying to find out when and why someone made some odd change.