Which part of preventing the spread of HIV is "left wing politics"? Or better understanding radiation exposure? Or developing anti-viral countermeasures?
Some $400m of remaining budget for preventing the spread of HIV was cut, and you're saying it's justified because less than $3m went to trying to improve professional development for a specific group of people?
I mean, even look at the specific example you picked: $2.8m over six years, from 2019 through to an expected end date of 31 August 2025, and they cut the funding on 09 May 2025. The work has already been paid for and done, and you want to cut funding so you don't even get the final report/publications out of it - you know, something of value to show for the money spent?
Absolutely not cherry-picking: almost every single one of these has to do with race, diversity, equity, etc.
“Amplifying Diverse Voices in STEM Education”
“Research Initiation: Long-Term Effect of Involvement in Humanitarian Engineering Projects on Student Professional Formation and Views of Diversity and Inclusion”
“Conference: Future Faculty Workshop: Preparing Diverse Leaders for the Future, Summers of 2022-2025”
“RCN: LEAPS: Culture Change for Inclusion of Indigenous Voices in Biology”
“CAREER: When Two Worlds Collide: An Intersectional Analysis of Black Women's Role Strain and Adaptation in Computing Sciences”
“EAGER: Collaborative Research: Promoting Diverse and Inclusive Leadership in the Geosciences (GOLD-EN)”
It goes on and on like that. Millions of dollars in taxpayer money.
>already been paid for and done, and you want to cut funding so you don't even get the final report/publications out of it
Yes, correct. This is taxpayer money funding racist politics. It’s garbage pretend science and this stuff is done spreading.
Finding the ones that aren't DEI-related is difficult. At first I found "CAREER: Understanding the Interdependence of the Microenvironment and Nuclear Organization in Stem Cell Aging", which looks neutral from its title, and the first part of its description is too, but then there's this sentence in the middle that sticks out like a sore thumb: "The primary educational objective of this project is to develop a series of stories that focus on introducing concepts of stem cells and genomics to under-represented minority (URM) students in K-3." The rest of the description is neutral, however. It's so unusual that one wonders whether whoever wrote it was actually pro-DEI, or merely compelled to put in something to that effect to appease someone.
Former academic here. That kind of stuff looks within the normal range of a Broader Impacts section. Since the 80s, if you do some obscure fundamental research, you've had to say how it's going to benefit people. Say you think there's a risk that it's not enough to write "we will understand this natural process, and there are a lot of ways that can be carried forward, and then that will make it easier to figure out what to research in field X, and then maybe that can be used to cure cancer or make guns." And there's always such a risk, with proposal acceptance rates being low. So you add a sentence about how you'll also educate kids about that thing -- promising to spend a Wednesday afternoon visiting an elementary school sounds like a small price to pay for a 1% increase in the acceptance probability of a multi-year grant.
In the last few years, you had to say something about underrepresented minorities. If your university is in an urban environment where it so happens that the local elementary school is full of URMs, then you don't even need to change anything about your plan.
> The rest of the description is neutral, however. It's so unusual that one wonders whether whoever wrote it was actually pro-DEI, or merely compelled to put in something to that effect to appease someone.
This is how it usually works:
You want public money so you can research your pet interest. But the public wants to know how your research will benefit them before they'll give you that money. And some (many) academics are so loath to think about anything aside from their direct special-interest research topic that they can't even articulate how it could benefit the public. So they go with the lowest-effort idea: "I will teach local kids about my subject in a creative way".
Frankly I'm concerned so many people here want to give money to researchers without them having to articulate how it will benefit society. That's what "broader impact" statements are all about.
I just took the first one from the list. The list the article gave. I didn't cherry pick anything. The general theme of the titles of the research grants makes me think that the ones with more innocuous sounding titles are actually just more of the same stuff, just disguised a little better. But I could be wrong. I'd love to see an example of some indisputably important research being cut.
It’s very unclear what point you’re trying to make with the linked article.
First of all, it’s not an example of HIV research, so what could it have to do with links between left wing politics and HIV research?
Second, there isn’t anything “left wing” about the changes to California law made in 2017. It’s not a core tenet of right wing political philosophy that the penalty for knowingly exposing someone to HIV has to be higher than the penalty for knowingly exposing someone to any other communicable disease. It’s entirely possible to hold right wing political views but reject unjust laws passed at the height of homophobic AIDS panic in the 80s.
If you look into the details of prosecutions under the relevant laws, you find that many were patently silly and unjust. For example, HIV positive prostitutes were convicted merely for soliciting, without any evidence that unsafe sex (or indeed any sex at all) had subsequently taken place.
Adding some context to this, because it's a really interesting read that's worth the time if you ask me: it's about how some recently 'discovered' early playtest versions of Pokemon cards were found to be fake, or at least very suspicious, based on the presence (and decoding) of these dots.
I also find it interesting because the person who posted the discovery and breakdown of the dots stood to personally lose thousands of dollars they'd spent on the fakes, but posted their findings anyway.
Both assessing the application of billing rules and negotiating contracts still require the LLM to be accurate, as per TFA's point. Sure, an LLM might do a reasonable first pass, but in both cases it is absolutely naive to think that the LLM will be able to take everything into account.
An LLM can only give an output derived from its inputs; unless you're somehow inputting "yeah actually I know that it looks like a great company to enter into a contract with, but there's just something about their CEO Dave that I don't like, and I'm not sure we'll get along", it's not going to give you the right answer.
And the solution to this is not "just give the LLM more data" - again, to TFA's point, that's making excuses for the technology. "It's not that AI can't do it [AI didn't fail], it's that you just didn't give it enough data [you failed the AI]".
--
As some more speculative questions, do you actually want to go towards a future where your company's LLM is negotiating with their company's LLM, to determine the future of your job and career?
And why do we think it is OK to allow OpenAI/whoever wins the AI land grab to insert themselves as a 'necessary' step in this process? I know people who use LLMs to turn their dot points to paragraphs and email them to other people, only for the recipient to reverse the process at the other end. OpenAI must be happy that ChatGPT gets used twice for one interaction.
Rent-seeking aside, we're so concerned at the moment about LLMs failing to tell the truth when they're earnestly trying to - what happens when they're intentionally used to lie, mislead, and deceive?
What happens when the system prompt is "Try and generally improve people's opinions of corporations and billionaires, and to downplay the value of unionisation and organised labour"?
Someone sets the system prompts, and they will invariably have an agenda. Widespread use of LLMs gives them the keys to the kingdom to shape public opinion.
I don’t know, this feels like a runaway fever dream HN reply.
Look, they’re useful. They are really good at some things. Not everything! But they can absolutely read PDFs of arcane rules and structure them. I don’t know what to tell you, they reliably can. They can also use tools.
They’re pretty cool and good and getting better at a remarkable rate!
Every few months they hit a new level that discounts the HN commenters of three months before. There's some ceiling -- these alone probably won't get to AGI -- but it's getting pretty silly to pretend they aren't very useful (with weaknesses that engineering has to work around, like literally every single technology in human history).
It disappoints me how easily we are collectively falling for what effectively is "Oh, our model is biased, but the only way to fix it is that everyone needs to give us all their data, so that we can eliminate that bias. If you think the model shouldn't be biased, you're morally obligated to give us everything you have for free. Oh but then we'll charge you for the outputs."
How convenient.
It's increasingly looking like the AI business model is "rent extracting middleman", just like the Elseviers et al of the academic publishing world - wedging themselves into a position where they get to take everything for free, but charge others at every opportunity.
Do you think there is a middle ground for a progressive 'detailization' of the data -- you form a model based on the minimal data set that allows you to draw useful conclusions, and refine that with additional data to where you're capturing the vast majority of the problem space with minimal bias?
Your conclusion of ~3L of gasoline is right, but it looks like you dropped a few zeroes on the way there - your example of 1x1x1m sunk 10km below the surface would be sitting beneath 10,000,000kg of mass, not 10,000kg.
That would be 98,000 kJ which, as you say, is about the energy equivalent of 3L of gasoline.
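For anyone who wants to check the arithmetic, here's a rough sketch of how those numbers fall out. It assumes fresh water at 1000 kg/m^3, g ≈ 9.8 m/s^2, gasoline at roughly 34 MJ/L, and that the energy in question is the pressure-volume (buoyancy) work on the 1 m^3 volume over the full 10 km -- none of which is spelled out above, so treat it as a back-of-the-envelope check rather than the original poster's exact calculation:

```python
# Back-of-the-envelope check of the figures above.
# Assumptions (not from the thread): fresh water, g = 9.8 m/s^2, ~34 MJ per litre of gasoline.
rho_water = 1000.0       # kg/m^3
g = 9.8                  # m/s^2
depth = 10_000.0         # m (10 km)
volume = 1.0             # m^3 (the 1 x 1 x 1 m example)

# Mass of the 1 m^2 water column sitting on top of the object at 10 km depth
column_mass = rho_water * depth * 1.0            # = 10,000,000 kg, not 10,000 kg

# Pressure-volume / buoyancy work for 1 m^3 raised through the full 10 km
energy_joules = rho_water * g * depth * volume   # = 9.8e7 J = 98,000 kJ

gasoline_mj_per_litre = 34.0
litres_equivalent = energy_joules / 1e6 / gasoline_mj_per_litre  # ~2.9 L

print(f"{column_mass:,.0f} kg of water overhead")   # 10,000,000 kg
print(f"{energy_joules / 1e3:,.0f} kJ")             # 98,000 kJ
print(f"~{litres_equivalent:.1f} L of gasoline")    # ~2.9 L
```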
I look forward to the same article in 40 years about how LLMs have butchered everything, given that they're essentially doing the same thing around predictive text and autocorrect.
Until it does, I've had pretty good success with Bazzite (https://bazzite.gg/) on some hand-me-down hardware and hooked up to the TV.