Hacker News | duncanfwalker's comments

It's not so valuable to assess the current state, i.e. what the impact of using AI is today. From personal experience, the overall impact on productivity felt negative a couple of years ago, might be positive now, and will be positive in a couple of years. That means by assessing the current impact on productivity we're just finding where we are on that change curve. If we accept that the trend is happening, then we know at some point it will pass (or has passed) the threshold where our companies will fall behind if they're not using it. We also know it takes a while to get up to speed and make the most of it, so the earlier we start the better. The counter-argument is that we could wait for a later wave to jump on, but that's risky, and the only potential reward is a small short-term productivity gain.


So you're saying instead of assessing the current capabilities of the technology, we should imagine its future capabilities, "accept" that they will surely be achieved and then assess those?


I would assess the directionality and rate of the trend. If it's getting better fast and we don't see a limit to that trend then it will eventually pass whatever threshold we set for adoption.


Spending is paid for out of tax and all those people will have paid tax. Paying less tax than someone else doesn't make a free rider. Deliberately opting out of an obligation while taking from the group makes a free rider.


I guess it makes it clearer that it should be a statically readable value? E.g. you shouldn't do things like use arguments to build the string.


I would def use this if there were a return “select …” option. There are heaps of scenarios where SQL is modified based on parameters. If there's no docstring, just use the return value maybe?

Our queries are typically large, not 3-5 liners.

(Filter-view queries where you might add additional CTEs to provide the necessary filter conditions, but which aren't desirable if a particular filter parameter is nil, etc.)


Hello, author here. It is actually possible to use return instead with a different set of decorators: check out the first "tip" block on this page: https://hyperflask.github.io/sqlorm/sql-functions/
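For illustration, here's a toy sketch of how a return-based SQL decorator could work. This is purely hypothetical code (the `returns_sql` and `fake_execute` names are made up), not sqlorm's actual API — see the linked docs for that:

```python
import functools

def returns_sql(execute):
    """Wrap a function so the (query, params) pair it returns is executed."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            query, params = fn(*args, **kwargs)
            return execute(query, params)
        return wrapper
    return decorator

# Stand-in for a real DB connection's execute method.
def fake_execute(query, params):
    return f"executed: {query} with {params}"

@returns_sql(fake_execute)
def find_user(user_id):
    # Returning the statement lets you branch on arguments before building it.
    return "SELECT * FROM users WHERE id = :id", {"id": user_id}
```

Because the SQL is built in the function body rather than a docstring, it can be assembled conditionally from the arguments.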


Just keep in mind best practice is to use the built-in parameter interpolation that comes with your db library, since it handles escaping and protects against SQL injection for you.

Be very careful if you ever use bare string formatting to construct your queries.
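As a concrete sketch with Python's built-in sqlite3 module (placeholder syntax varies by driver — sqlite3 uses `?`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

# Safe: the driver binds the value via the ? placeholder, so the
# injection attempt below is treated as a literal string.
name = "alice'; DROP TABLE users; --"
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
# No rows match, and the table is untouched.

# Unsafe: never interpolate values into the SQL string yourself.
# conn.execute(f"SELECT * FROM users WHERE name = '{name}'")  # injectable!
```

The same principle applies whatever the driver: keep values out of the SQL string and pass them separately.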


This is spot on. I'd love it if it were possible to get integration with the QGIS ecosystem. It could open up open-source integrations, or even a commercial offering that joins things up in a cohesive way, just something that enables a smoother collaboration model.


Any tips on smoothing the transition between the two that mean work isn't duplicated?


Cache interim data. Use QGIS for exploration.


That's an insightful nuance. I've seen it create divisions in organisations because, while it is a really fully featured desktop application, it implies a way of working that doesn't play well with the cloud, which creates barriers between experimenting and production.


As other comments have said, I'd prefer other solutions that get all the tests to run faster. It would be interesting to see if this could be used to prioritise tests - running the tests most likely to fail sooner.
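A minimal sketch of what failure-based prioritisation could look like. The history data here is made up; in practice it would be loaded from previous CI runs:

```python
def prioritise(tests, history):
    """Order tests by descending historical failure count,
    keeping the original order as a tie-breaker (sorted is stable)."""
    return sorted(tests, key=lambda t: -history.get(t, 0))

# Hypothetical example: failure counts recorded from earlier runs.
tests = ["test_login", "test_checkout", "test_search"]
history = {"test_checkout": 5, "test_search": 1}

print(prioritise(tests, history))
# ['test_checkout', 'test_search', 'test_login']
```

Running flaky or recently failing tests first lets a broken build abort early instead of after the full suite.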


Oh god, this has just made me reflect that we're in the golden age of generative AI - not in technology terms, but in user-experience terms. We're in the period where the major products are competing against each other, before they switch into enshittification mode. You're certainly right, there are going to be ads in the answers and probably worse. I'm imagining companies paying to introduce ideas as subtle subtexts to millions of unrelated answers, or platforms deliberately engineering the UX to maximise understanding of our drives and preferences purely so it can be sold.


You better believe I’m going full self-hosted AI the moment I get a whiff of sponsored content in an AI response I paid for.


Some decent tech billionaire should do what Andrew Carnegie did for libraries but for AI. Would be really cool if they tied it to local libraries.


Of course. We're in the burning-VC-cash-by-the-truckload phase. And inference isn't getting cheaper. I'd argue this is the worst it's ever been in terms of over-extending a business, and they might not be able to enshittify fast enough.


Inference will get much much cheaper. We're paying $30000 for a top of the line gpu right now, but that's only because everyone insists on buying nvidia, so nvidia has full incentive to charge the absolute maximum.

Long term that vendor lock in will go away and prices will go down to something reasonable.

Long ago CPUs were super expensive too; now they're so cheap we put them in toothbrushes.


> Long ago CPUs were super expensive too, now they're so cheap we put them in toothbrushes

Long ago GPUs were already affordable. We used to buy them to play _games_!


A Voodoo2 12MB was only $600 in today's money!


I wonder whether we'll be able to look back on this period in 10 years' time and say definitively whether the wide spectrum of responses to LLMs was perception or a real feature of our differing jobs.


At the start of your comment I thought the 'issues like this' were going to be the 4 year discussions about what is and isn't core.


So did I :-) but I think the concepts are related: Linus’ ability to shift into autocratic leadership mode when necessary seems to prevent issues like the 4 year indecisiveness on v2/core from compromising product quality to the point where Linux is trusted in a way that rivals commercial software.


+1 you're paying for the governance as much as you're paying for the code.


Well said, thank you

