Everyone dislikes pedantically verifying references. However, if you cut corners here then will you also cut corners pedantically verifying research results?
Beyond references, the point of the literature review is to ensure you have read the literature and understand it well enough to accurately summarize it. If you present a literature review, it's likely assumed you did all of this. So at the very least you should be upfront about how an LLM assisted you.
Before LLMs, I watched in horror as colleagues immediately copy-paste-ran Stack Overflow solutions into the terminal without even reading them.
LLM agents are basically the same, except now everyone is doing it. They copy-paste-run lots of code without meaningfully reviewing it.
My fear is that some colleagues are getting more skilled at prompting but less skilled at coding and writing. And the prompting skills may not generalize much outside of certain LLMs.
Elon at some point threatened to have an LLM rewrite all of the training data to remove "woke" content. I assume Grokipedia is his experiment in doing this (and perhaps he hopes it will infect other training sets too?) ...
I highly doubt collaborators are thinking less of you for not having more publications! After all, if they are collaborators they've already decided to work with you.
My understanding is you were awarded a PhD but with the minimum number of papers? This sounds completely normal. If you are trying for faculty positions it may appear "thin", but in industry it will make you stand out for many jobs. In AI, there are many researchers who don't even have a PhD!
It does sound awkward if asked why you didn't write more papers, but you can just discuss some of the challenges you faced in your existing works rather than getting into personal details.
Recently about half of the items sold to me as "New" have arrived used or counterfeit. The sellers have 5-star ratings, despite numerous reviews about receiving used or fake products. Unfortunately, Amazon crosses out these negative reviews and doesn't count them toward the overall seller rating.
The hypothetical you state only matters once you have a game! The biggest risk by far is not AI assets -- it's finishing the game.
So if AI increases your odds of finishing, go for it. Then once you have a game, more people will care about whether it's good than whether you used AI assets. I suspect there will be lots of interest in how you incorporated AI, maybe even more so than otherwise. Alternatively, you could use the AI assets as placeholders and replace them later with hand-drawn art, if desired.
When I made games, I had zero interest in making assets but wanted to understand every detail about graphics engines. I just grabbed random mediocre assets from online. I would have definitely used AI to make assets but done the coding myself.
I don't consider any Windows apps light. I'd rather open vim in WSL, since I always have WSL open on Windows anyway. That said, I do welcome the markdown support: I can quickly jot down something that's pleasingly presentable, say during a presentation or meeting. From a bloatware perspective, I'd be more worried about the LLM support, which I had no idea it had. I only learned about it from a comment.
This happens with every new tech. When websites first appeared, many businesses trusted kids to build their website. The key in applied work is to build a portfolio that shows off your abilities.
The reality is that most people who need services don't know anyone who is traditionally qualified and available. So, a portfolio may convince them to take a chance on a newcomer over an overly expensive firm (that often also just hires newcomers).
Have you tried emailing him? He likely also owns hopding.com, and both domains consistently seem to be hosted at Squarespace. On his last GitHub commit (Feb 2025), someone commented "Good to see you're still with us :-)", so he may just not update things often.
True, I accidentally posted the date of the comment, not the commit. The only strange thing is that he used a smiley in the referenced commit message, which doesn't seem to be his style.
It's very possibly more of a cultural issue. COVID led to persistently low attendance, there is currently an anti-education political trend, and AI advances make it easy for students to be lazy. If parents and peers don't value education, the students won't either.