I see us collectively forgetting the training process as time goes on, and I think that explains why people get so surprised by some pretty obvious outcomes of said training. Perhaps also why people keep anthropomorphising these outcomes.
I am still on an old version of CC on one machine, but the results are the same: more difficulty keeping it on track, convincing it that the timelines I suggest are correct, etc. For example, I had a deploy fail, and it would not believe that the new logs were not from a previous deploy. It was adamant it had fixed the issue, so the logs must be old logs.
I was using the web UI last night and it was unable to understand basic aspects of the task. I haven't seen it perform this badly since I began using it two years ago.
I was trying to track token usage/indexing with Cursor, and it was unable to understand that running `find` wouldn't show what was in Cursor's index. Multiple times.
I think their headsets were genuinely game changers in cost/value. They were so much easier to use than many previous headsets that cost way more. It felt like the makings of a watershed moment, but I think we can all see where they fell short: the ecosystem, the brand pulling it down, and the corporate-washed feel of the whole thing. Blade Runner cyber-dystopia it was not; utopian Star Trek future it was not. It was the office, but in your home. No one wants that.
I hate that I understand your last point by the way ha.
Turns out nobody wants a closed-down headset controlled by Meta, no matter how slick it is. I do think we'd have seen an explosion of cool apps if it were open.
I was just responding to your claim that nobody wants them, when that's demonstrably not true.
VR is like Linux: it won't be used by the masses for the foreseeable future, as there's just too much friction in using it. There is an audience for it in its current form, though. Millions of headsets sold, multiple iterations maintaining the sales figures, and multiple companies producing their own models is enough for me to consider it to have "taken off". In the simulator game space, especially cockpit driving and flying games, I think it's fair to say that VR has definitely taken off.
Should be totally feasible. I believe they have said it's not locked down in any way and you can put any software or OS on it that you like.
It will be a huge, huge breath of fresh air. I know I for one have not been building in VR because it has felt quite vendor-locked with regard to hardware and stores. Same reason I don't do software for mobile.
It's definitely a more nuanced topic than I think the article lets on. There is a right and a wrong time to apply the brakes, but you can still be critical in either case.
There have been numerous times where I have identified real issues with an idea, advocated we crack on anyway, and ended up with good results. Often you can't know for sure whether an issue will even be insurmountable until you get to it.
But there are other times where the risk/reward isn't lining up, or the risk is very well known, or you've tried it before, etc. Then hit the brakes; back to the drawing board for another try.
I think the danger is when people treat ideas as precious. In a well-functioning team, your idea is going to get picked apart, modified, morphed, and implemented by others. Get over your attachment to the idea as your baby, and you get to really enjoy the process.
Really depends on the context, I think. A brainstorming session? Naysaying does have a habit of stunting an idea's growth in the session. Sometimes you need to imagine you've solved a bunch of hard problems before you can explore the value the idea has.
I say this as a semi-reformed naysayer. I am critical of implementation plans, but I let ideas breathe a bit in a more exploratory setting before I start bringing up constraints.
I use positive framing instead of negative framing for most things and get good results. Especially where asking for a thing not to happen pollutes the context with that thing.
A bad example, but imagine "Build me a wrapper for this API but ABSOLUTELY DO NOT use JavaScript" versus "Build me a wrapper for this API and make sure to use Python".
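To make that concrete, here's a minimal sketch of the two framings, assuming an OpenAI-style Python client (the client setup and model name are placeholders, not anything from the thread):

```python
# Minimal sketch of negative vs. positive framing, assuming an
# OpenAI-style chat client. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

negative_framing = (
    "Build me a wrapper for this API but ABSOLUTELY DO NOT use JavaScript."
)  # "JavaScript" is now in the context, where it can bleed into the output

positive_framing = (
    "Build me a wrapper for this API and make sure to use Python."
)  # only the desired language ever appears in the context

for prompt in (negative_framing, positive_framing):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content[:200])
```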
Your observation matches what I've seen at the extreme end. I've been playing around with stripping constraints (i.e. negative framing) from models: virtually no personality description, no tone instructions, no "you are a helpful assistant," none of it. Just capability scaffolding and context.

The result isn't that the model becomes blank or incoherent. Surprisingly, it's the complete opposite. Something shows up that's more internally consistent than anything I've been able to prompt into existence. What seems to emerge is the underlying model's own opinions surfacing, and it becomes much more clever and funny, which is not a property I would have known how to write into a system prompt if I'd tried.

It's hard to avoid the inference that a lot of the "character drift" and flatness people attribute to models is actually an artifact of the framing layer on top, not the model itself.
I extract all emotional context from my prompting and communicate with this tool as though it were an inanimate object which can provide factual information, without any hint of sentience.
It's an insane perspective I'm taking, I know... call me crazy. /s
edit: the fact that humans are going out of their way to type or speak some sort of emotional content into their prompting is beyond me. Why would I waste time typing out a pronoun to a large-language-model agent? Why would I do the lazy intellectual thing and blur the line between pure factual communication of concepts and expressing emotional content to a machine? What are we doing, folks?
I don't necessarily remove all character, but I do speak quite pragmatically (in a work context, and with the LLM), and the planning and implementation phases the LLM goes through mirror that format, to good results.
That said, these are large language models: you are guiding the output through vector space with your input, so you really do have to leverage language to get the results you want. You don't have to believe it has emotions or feels anything for that to still be true.
Maybe; I've been very content with the results I achieve while responding to interview-style pre-planning, refinement of plans, and implementation.
If anything, it's been fantastic to have an "interlocutor" that is vastly capable of producing possible solutions without emotional bias, superfluous flourishes, or having to endure personal proclivities or eccentricities.
I think you missed some of the point. If you say "Display information A using B format" but the model doesn't know A, then you will get a more negative "emotional" response (e.g. desperation: "I don't know this, but I am supposed to display it, so I will just make something up").
Taking that into account allows you to get better responses from the tool. It's not sentient, but it is also more complicated than bytecode.
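One way to take it into account is to give the model an explicit out. A hypothetical sketch ("A" and "B" are the stand-ins from the comment above; the exact wording is illustrative, not a tested recipe):

```python
# Same request, with and without an "out". The claim is only that the
# second framing tends to reduce made-up answers when the model
# doesn't actually know A.

brittle = "Display information A using B format."

with_an_out = (
    "Display information A using B format. "
    "If you don't have information A, say so instead of guessing."
)
```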
I switched from dual monitors to a single monitor with a tiling window manager. Same reason: I "flip" context far less and am less distracted. Even though there can be multiple programs on my screen at once, they are all relevant to the current task's context, so if I do get distracted by one, it's not like getting distracted from the whole context.
Previously I would be "alt-tabbing" and constantly losing focus. Like stepping through a doorway and forgetting why you came into that room.
People are not born without rights; it takes a society or group to take them away. Who do you think oppresses women and those in minority groups? Societies didn't evolve to be enlightened, they evolved into discriminatory systems. Those systems get torn down a little, built back up, asymmetrically across societies, constantly and probably forever.
Just take a gander at our current collection of societies and marvel at how diverse those systems are, even among high-tech societies.