FDM 3D printing is still a wild west, and there are plenty of avenues to explore. Not sure what else to say about that other than, as someone with daily and close personal proximity to the 'industry' that cropped up around it, I am well aware that there is plenty of work to be done by enthusiasts and niche people.
Engineering and machinery are still fields full of exploration if you have the chops. If you don't have them yet, there are plenty of topics within that domain to explore; you'll never run out of things to learn there.
My $0.02: learn to disregard the crowds and focus on your own work. Just because people are doing something you used to do doesn't mean they have anywhere near the depth of understanding and 'freedom of movement' that you have as a 'resident expert'.
also: the fact that no one is doing something may be a signal; crowds form for a reason. Very few hobbyist bomb-squad folk and rabid-raccoon caregivers, get what I mean?
the GPT-3 models didn't keep you from learning about ML. The industry didn't push you away from keyboards and printers. You did these things.
If you're trying to lead an entirely one-off human life, totally unique from other people, then all I can suggest is hallucinogens, but personally I think the goal of being unique just for the sake of being unique is ludicrous.
Just find enjoyment, that's the goal for me at least.
I knew a guy who found an interesting 3D printing niche: two-way radios for professionals (mainly SAR crews) are always getting fetched up on clothing, and you often find the radio turned off because the knobs got bumped. Dumb problem, should have been solved by fundamental engineering years ago, but whatever. He built a 3D-printed shroud for a variety of popular radios and now makes a living selling them.
He's a tech guy, but no engineer. He saw the need (he works on a SAR team), saw the solution and made it happen. Inspiring, really.
I do a bit of 3D printing myself. Personally, I'm attracted to the fact that it's getting more professional. I can use it as the impetus to learn real engineering, CAD, etc. Not in an "I'm an engineer" way, but still using real principles to make better things. You don't have to be intimidated if you keep your identity small and let it inspire you instead.
yeah, the metaverse got abandoned. Also: Meta was the only one to actually try the concept for the past umpteen years, even though everyone in the industry goes ga-ga over virtual reality worlds and workplaces at every opportunity. It's literally Meta and Linden Labs (which has been on life support for 10+ years).
The alternative is: no one does it and nothing gets abandoned, which the industry has shown itself to be exceedingly good at w.r.t. VR for the past 40+ years.
To be clear: I have no faith in Meta as a company; my problem lies in kicking an entity because they attempted something different. I don't think that's productive, and it produces things like the past AI winters, because groups get afraid of ever touching experimental concepts again lest they incur the wrath of the shareholder.
It's not a failure here or there, it's the pattern. It's not even the failing, it's the excessive hype cycle.
We keep seeing things get overhyped without much thought behind them. Meta is particularly bad about it. They changed their name to ride the hype of their VR product when VR was still niche and had a long way to go, and it still does. They couldn't even figure out legs for launch.
Now they have 'superintelligence'? Yeah, that sounds like just the latest in a line of bullshit. Why would this be different?
>> it's really hard to sometimes break out of that loop and do manual fixes
it's not just an erosion of skills, it can also break the whole LLM toolchain flow.
Easy example: put together some fairly complicated multi-facet program with an LLM. You'll eventually hit a bug that it needs to be coaxed into fixing. In the middle of this bug-fixing conversation, go ahead and fire up an editor and flip a true/false or change a value.
Half the time it'll go unnoticed. The other half of the time the LLM will do a git diff, see those values changed, and then proceed to go off on a tangent auditing the code for anything that could have autonomously flipped those values.
This creates a situation where you not only have to flip the value; the next prompt to the LLM has to be "I just flipped value Y.." in order to head off the tangent that it (quite rightfully, in most cases) goes on when it sees a mysteriously changed value.
so you either lean in and tell the LLM "flip this value", or you flip the value yourself and then explain. It takes more tokens to explain, in most cases, so you generally eat the time and let the LLM sort it out.
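One other workaround, sketched here under the assumption of a git-backed project and an agent that diffs against HEAD (file and flag names below are hypothetical): commit the manual tweak with an explicit marker before the next prompt, so the diff the LLM sees is clean and the intent lives in the log instead of your next message.

```shell
# hand-flip the value yourself (config.py / RETRY_ENABLED are made-up names)
sed -i 's/RETRY_ENABLED = False/RETRY_ENABLED = True/' config.py

# record it as deliberate: the agent's next `git diff` comes up empty,
# and `git log` explains the change instead of it looking mysterious
git add config.py
git commit -m "manual: enable retries while debugging (intentional)"
```

Whether this beats just explaining in the prompt depends on how your particular agent inspects the repo, so treat it as one option, not the fix.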
so yeah, skill erosion, but it's also just a point of technical friction right now that'll improve.
This was a great comment. I don't know if it's common knowledge, but this really helped clarify how the shift happens.
I also remember half coding and half prompting a few months back, only to get frustrated when my manual changes started to confuse the LLM. Eventually you either have to make every change through prompting, or be OK with throwing away an existing session and adding the relevant context back into a fresh one.
I'm not yet at the point where I'm comfortable just vibe coding slop and committing it to source control. I'm always going in and correcting things the LLM gets wrong, and it really sucks to have to keep a mental list of all the changes you made just so you can tell your Eager Electronic Intern that you made them deliberately and that it shouldn't undo or agonize over them.
Microsoft has zero goodwill from me with regards to software quality or pro-consumer business moves.
I will forever assume that a Microsoft product is 1) technically broken, 2) trying to upsell me, and 3) probably required by some esoteric software suite, which is the only reason a sane human would put up with it.
a nuclear submarine is first and foremost built as a protective sarcophagus for its power plant, and that's on top of submarines being designed to compartmentalize damage anyway.
i.e. if it could totally destroy itself with a full payload, that would be a very bad design choice. Not that there weren't plenty of bad choices w.r.t. the Kursk.
kinda complicated when you consider it fully, though. Power consumption really only measures the environmental impact, and we come up with more clever ways to use the same amount of power daily.
it's kind of like having an electric motor before a strong understanding of the Lorentz force or Ohm's law. We don't really know how inefficient the thing is, because we don't really know where the ceiling is, aside from some loose theoretical computational-efficiency concepts that don't strongly apply to practical LLMs.
to be clear, I don't disagree that it's the limiting factor, just that 'limits' is nuanced here between effort/ability and raw power use.
people who 'violate the rules of good code' when vibe coding are largely people who didn't know the rules of good code to begin with.
want code that isn't shit? embrace a coding paradigm and stick to it, without flip-flopping and dipping your toe into every pond; use a good VCS; and embrace modularity and decomposability.
the same rules apply when 'writing real code'.
9/10 times when I see an out-of-control vibe-coded project, it sorta-kinda started as OOP before sorta-kinda trying to be functional, and so on. You can literally see the trends change mid-codebase. That would produce shit regardless of the mechanism that worked that way: human, LLM, alien, or otherwise.
I used a Sonnet eGPU box on a similarly equipped Dell XPS, and it had so many little issues that it sold me off of eGPUs over Thunderbolt entirely.
Sleep broke across all OSes, and when sleep didn't break, the GPU wouldn't get powered on with the laptop. If one side lost power during an outage (the GPU side; the laptop has a battery), it would require an elaborate voodoo ritual of cycling both of them on and off until they 'caught' each other. It would cause the rest of the USB ports on the laptop to reset and drop comms with peripherals once or twice a week, necessitating a rain-dance restart.
when Oculink first started showing up, I gave up altogether and just said "fuck it, I'll try it again in a few years."
It worked fine when it worked fine, but the patches in between were not worth my time.
I blame Dell and their Thunderbolt controllers entirely for the issues, but it left such a bad taste in my mouth that I would have a really tough time buying the newest Sonnet box to try it out. Now I have a desktop machine and don't fall into that market anyway.
I ended up throwing that card (an RTX 3xxx) into a Dell rackmount and have been happy with it ever since.
to your point, though: the non-proprietary PSU was a nice feature, but in reality the PCIe->Thunderbolt expansion card (or whichever interface you're using) can be bought on Alibaba for like 20-30 bucks, and the PSU, a generic white-label 650 W unit, is worth another 30-40 bucks. I think if I did it over I'd just do that and make an enclosure, but the Sonnet boxes aren't too bad a value by the numbers.
I haven't seen that done yet. I think it's one of those Dryland myths.