Hacker News | benjaminwootton's comments

They need something like this, as automating Google apps with AI is hard and flaky. However, step 2 drops me onto a fairly technical-looking page where I have to configure Google Cloud. If they had a one-click installer to automate Google apps, it would be an absolute killer use case for AI for me.


A demo of using an LLM/agent to analyse data stored in ClickHouse. This stuff is getting really good!


Does it require a semantic layer such as DBT, Lightdash, or Rill?


I’ve been saying the same, and the same about data more generally. I don’t want to go and look; I want to be told what I need to know.


I wish I knew the difference. I’ve run or been close to tens of businesses over the last 20 years and we’ve always paid the Google tax, but I’m not sure it’s ever had a positive ROI.


The AdWords platform is extremely complicated nowadays, and try as I might I can’t get any impressions from it. I then spent a period working with an AdWords specialist from their team, who also couldn’t get any impressions. It’s like they don’t want or need my money.


It’s yet another way for investors to screw early employees whose face doesn’t fit.


The bigger issue is that LLMs haven’t had much training on Q, as there’s little publicly available code. I recently had to hack some Q code together, and LLMs couldn’t string even simple pieces together.

It’s a bizarre language.


I don't think that's the biggest problem. I think it's the tokenizer: it probably does a poor job with array languages.
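To see why a tokenizer might struggle, here is a toy sketch (not a real BPE tokenizer; the vocabulary is invented for illustration): a greedy longest-match tokenizer with an English/code-flavoured vocab keeps `sum` as one meaningful unit in Python, but has to break a dense q glyph run into single characters.

```python
def naive_subword_tokens(text, vocab):
    """Greedy longest-match tokenization against a toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        # Try the longest piece first; fall back to a single character.
        for size in range(min(8, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if size == 1 or piece in vocab:
                tokens.append(piece)
                i += size
                break
    return tokens

# Toy vocab with common English/code subwords but no q idioms.
vocab = {"sum", "def", "return", "for", "in", "range", "(x)"}

q_expr = "+/x"        # q: sum over x
py_expr = "sum(x)"    # Python equivalent

print(naive_subword_tokens(q_expr, vocab))   # every glyph is its own token
print(naive_subword_tokens(py_expr, vocab))  # 'sum' survives as one unit
```

Real BPE vocabularies are learned from corpus statistics rather than hand-written, but the effect is similar: glyph-dense array code yields long runs of near-meaningless single-character tokens.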


Perhaps for array languages LLMs would do a better job running on a q/APL parse tree (produced using tree-sitter?) with the output compressed into the traditional array-language line noise just before display, outside the agentic workflow.
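As a sketch of the compression step in that workflow, here is a hypothetical final pass that renders a verbose S-expression-style tree back into q-style line noise. The tree shape and the glyph table are invented for illustration; a real pipeline would use an actual q/APL grammar.

```python
# Map verbose node names (what the LLM would read and write) to q glyphs.
GLYPHS = {"reduce": "/", "scan": "\\", "each": "'", "plus": "+", "times": "*"}

def render_q(node):
    """Compress a verbose AST node back into terse q syntax."""
    if isinstance(node, str):
        return node                        # a variable name
    op, *args = node
    if op in ("reduce", "scan", "each"):   # adverbs modify their verb
        verb, operand = args
        return render_q(verb) + GLYPHS[op] + render_q(operand)
    return GLYPHS[op]                      # a primitive verb

# "sum over x": the model sees the verbose tree, the user sees +/x.
tree = ("reduce", ("plus",), "x")
print(render_q(tree))  # +/x
```

The agentic loop would operate entirely on the verbose trees, with this render step applied only at display time.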


This is the dream, but it keeps crashing and sinking against reality. It seems intuitive that running language models on the AST should work better than running them on the source code, but as far as I'm aware every attempt to do this has resulted in much worse performance. There's so much more training data available as source code, and working in source code form gives you access to so much more outside context (comments, documentation, Stack Overflow posts), that it more than cancels out the disadvantages.


Perhaps if we also trained them on natural language ASTs at the same time when asking the questions? :)


There is some truth in this. I fit into a few of these buckets and I don’t think I could ever recommend their enterprise stuff after having my favourite consumer products pulled.


The public markets are hitting all-time highs every week. It’s going to be painful if that pops.


This has pretty much ruined TV for me. I watch with the remote in hand, constantly turning the volume up and down. It breaks the immersion.

