>It's highly likely that the problem was solved before and the model picked that up.
If you can demonstrate that, I would put it to Strominger and his colleagues, and I imagine they would be obligated to cite your contribution in the peer-reviewed publication.
> If you can demonstrate that, I would put it to Strominger and his colleagues, and I imagine they would be obligated to cite your contribution in the peer-reviewed publication.
There's one little problem: OpenAI isn't actually open and doesn't reveal which dataset they used for training.
Do you understand the purported result, and the verification? I don't, but I'm confident that Andrew Strominger wouldn't have agreed to put his name on this if he didn't think it was correct and interesting.
The human authors have positions at the Institute for Advanced Study (Einstein's old institution), Vanderbilt, Harvard (Strominger) and Cambridge in the UK.
If you have to gauge this by the reputation of the experts involved, as I do, that seems like a good list to me.
The two main sources for this piece seem to be interviews with Simon Thomas, the Chief Executive of a British semiconductor manufacturer (Paragraf) based in Cambridgeshire; and Raoul Ruparel, a director at Boston Consulting Group, which is based in Massachusetts.
The narrative is that there is a chronic lack of investment in the UK which is preventing promising companies from securing connections to the electrical grid, and that local planning bureaucracy is also inhibiting development. Public services, including transportation, are weak; there is a lack of affordable housing; and these problems are so bad that, following the passage of the CHIPS Act, Paragraf considered setting up in the US.
The article notes that Paragraf was spun out of the University of Cambridge six years ago, and established itself nearby. Cambridge is arguably the best university in Britain for science and technology, so perhaps if Paragraf had set up in the US, it might have chosen a location near a close equivalent, such as MIT or Harvard, both of which are coincidentally situated in Cambridge, Massachusetts — a town that was named in honour of the British university. The two Cambridges have similar populations and similar centuries-old associations with academia, but how do they compare on the criteria discussed in the article?
The average house price in the British Cambridge is currently about £500,000 ($635,000 at the exchange rate used in the article), and the average house price in the American Cambridge is currently in the region of $1,000,000 (£787,000). They are both unaffordable by normal standards, but the American Cambridge is the worse of the two at the moment (or the better, if you like high property values).
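For what it's worth, the figures above are self-consistent; a quick sketch of the conversion, using the exchange rate implied by the article's own numbers (about $1.27 per pound, which is an inference on my part, not a figure the article states):

```python
# Check the quoted house prices at the exchange rate implied by the article
# ($635,000 for £500,000). The rate itself is inferred, not quoted.
usd_per_gbp = 635_000 / 500_000   # implied rate, ~1.27

uk_price_gbp = 500_000            # British Cambridge, quoted in GBP
us_price_usd = 1_000_000          # American Cambridge, quoted in USD

uk_price_usd = uk_price_gbp * usd_per_gbp   # -> 635000.0
us_price_gbp = us_price_usd / usd_per_gbp   # -> ~787,402, i.e. roughly £787,000

print(usd_per_gbp, uk_price_usd, round(us_price_gbp))
```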
As regards transportation, both the British and American Cambridge are known for their cycling culture. The British Cambridge has the longest guided busway in the world, which was opened in 2011. Public transport in the American Cambridge is part of one of the oldest mass transit systems in the US, which is run by the Massachusetts Bay Transportation Authority (MBTA). The MBTA has acquired a reputation for financial mismanagement, and the system it controls is amongst the most dangerous in America; in 2022 the Federal Transit Administration announced that it would be intervening at the authority and was 'extremely concerned with the ongoing safety issues' affecting the system [1].
What about connections to the electricity grid in the two locations? Curiously, much of the electrical infrastructure both in Britain and in Cambridge, Massachusetts is owned by the same corporation: National Grid PLC. National Grid is a British company, but it has a substantial American business [2], and it owns a lot of the electrical grid in Massachusetts and New York; it is likely that the entity responsible for large parts of Britain's electrical infrastructure also owns the infrastructure that supplies the Boston office building where Raoul Ruparel's company is headquartered. The transatlantic involvement in electrical infrastructure runs both ways: a large part of the electrical grid of Northern England is owned by a subsidiary of Berkshire Hathaway [3].
The issues of planning bureaucracy discussed in the article are complicated, and properly comparing the American and British systems would be a difficult exercise, but from what I know of them they aren't dissimilar, and the complaints about them are comparable. The article mentions recent measures by the British government to 'impede NIMBY-ism'. Nimbyism is originally an American term (the acronym can only be derived from American English: a 'back yard' in the US would typically be called a 'back garden' in the UK), but it easily gained currency in Britain because the concept and many of the issues underlying it are familiar in both countries. It seems to me that in some respects this issue is a feature rather than a bug: the right of local people and organisations to have some say over the fate of their immediate environment is part of the practice of democracy in both Britain and America, and it may be necessary to uphold the system of private property rights that businesses carrying out development themselves depend on.
While I may not agree with your NIMBY-ism comments, the overall picture is undeniably one where they are either trying to get concessions out of the UK government to stay, or trying to sell the concept that they are moving to the US. Presumably the execs see that they could be a lot more profitable in Massachusetts.
This doesn't seem to have been firmly established, but as far as I can tell it's currently thought to be related to how carbon in the solar system's protoplanetary disk behaved when it was vaporized after fusion began in the sun. The question was studied in this paper published in 2021 [1], which is discussed in this article [2].
The authors of the paper suggest that the material that formed the Earth was depleted of carbon early in the solar system's history due to solar activity, and that most of the carbon now on Earth was delivered to the planet later on directly from the interstellar medium.
I should note a couple of clarifications to my first comment: the elemental abundance I mentioned for the universe and Earth's solar system does not include helium and neon, which are abundant, but are usually ignored in this context as they're noble gases.
There is also estimated to be slightly more mass in the present-day universe in the form of iron than nitrogen due to the high mass of iron atoms (nitrogen is the fourth most abundant element by mass in the human body, but the body contains relatively little iron). The number of nitrogen atoms in the universe, however, is substantially higher than the number of iron atoms. The amount of iron in the early universe should also have been lower; the element is formed late in the stellar life cycle [3], whereas the other cosmologically abundant elements that are relevant to biology (carbon, nitrogen and oxygen) are formed earlier [4].
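The mass-versus-number distinction above comes down to atomic weights (iron atoms are roughly four times heavier than nitrogen atoms), which a quick sketch can illustrate. The mass fractions below are rough, round numbers of my own chosen purely for illustration, not figures from a specific survey:

```python
# Why iron can outrank nitrogen by mass while nitrogen wins by atom count.
m_N, m_Fe = 14.0, 56.0                        # approximate atomic masses (u)
mass_frac_N, mass_frac_Fe = 1.0e-3, 1.2e-3    # assumed illustrative mass fractions

atoms_N = mass_frac_N / m_N      # relative number of nitrogen atoms
atoms_Fe = mass_frac_Fe / m_Fe   # relative number of iron atoms

print(mass_frac_Fe > mass_frac_N)   # iron wins by mass
print(atoms_N > atoms_Fe)           # nitrogen wins by atom count
print(atoms_N / atoms_Fe)           # ~3.3x more nitrogen atoms in this example
```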
>If generative AI can repeatedly test physics theories faster than humans, then we may witness progress in physics. AI could generate thousands of theories and conduct experiments successfully, possibly leading to new physics models. However, I am uncertain whether this will be achievable soon, particularly for theories requiring costly experiments.
I've long felt that this may be the strongest argument against an AI singularity.
The technical ability to emulate the minds of the world's theoretical physicists and run accelerated simulations of their thought processes may be developed, but the generation of valid new insights in physics might depend strongly on observations and experiments conducted in the physical world, as seems to have been the case historically, and the virtual equivalents of those experiments may prove to be inadequate or impractical to implement.
Steven Pinker made a similar argument in this 2018 discussion with Sam Harris (the remarks begin at 65m03s in this recording [1]; the full context begins at around 50m36s [2]). Harris is concerned about existential risks posed by advances in artificial intelligence, whereas Pinker is less so, in part for this reason. I agree with Harris that there are risks associated with artificial general intelligence, but I agree with Pinker and the parent comment about the dependence of the scientific process on experiment, and that an inability to conduct accelerated experiments in the physical world may undermine the standard argument about the inevitability of an AI singularity.
An AI capable of interfacing with the physical world might develop the ability to conduct accelerated physical experiments, but it would presumably face the same fundamental and contingent limits as human researchers, and the history of human science suggests those limits may impede exponential progress.
> I asked it to stop adding disclaimers as you can see.
This is the first example of an extended dialogue with GPT-4 that I have read, and the fact that it failed to obey the request to dispense with its repetitive disclaimers was perhaps the most interesting thing to me about it. It seems somehow more fluent to me than GPT-3, as though its verbal IQ has increased a few points, but GPT-3 was already quite articulate; I haven't yet seen any examples of clear new abilities from GPT-4.
The substance of the dialogue struck me as generic and lacking novel insight, though the bar was of course set rather high (essentially 'Describe revolutionary new physics'). I've also been jaded by the past few years of advances in AI; if I had seen this transcript ten years ago I would have been surprised and impressed that an AI could have a conversation about theoretical physics, and could demonstrate an ability to discuss relevant concepts in a reasonable and confident manner.
The ability of large language models to exhibit sophisticated verbal reasoning, albeit not yet reliably so, is their most striking feature to me, and I do think that has great scientific potential; perhaps GPT-4 isn't yet a major advance in that respect, but I imagine an important foundation has been laid. I should say I'm grateful to you for publishing this, Ramraj; the transcript and the impressions you and others have shared in this thread have been illuminating.
Anyone who is curious about the application of AI to theoretical physics may be interested in the work of the MIT physicist Max Tegmark and his group, which is still at an early stage. Here are some videos in which he discusses AI and physics, in increasing order of detail:
This is a great analogy, I'd never seen it translated into tangible terms like that before.
I remember reading that, at close enough range, the neutrino emissions from a supernova would be intense enough to be dangerous to structures made of ordinary matter, despite the weakness of their interactions, and that they would reach an observer earlier than other forms of radiation due to their ability to escape the collapsing star relatively unimpeded. Neutrinos would be the least of your problems if you were the observer of course.
As I was trying to find a source for this, I discovered there is a unit [1] for the amount of energy released by a supernova called the Foe, which seems apt (it's an acronym derived from 'ten to the power of Fifty-One-Ergs').
Thanks for a great couple of replies. I'd just add that there are almost certainly more superheavy elements not thought to exist in nature which have yet to be produced artificially, but probably will be at some point.
There are definitely unstable superheavy elements that have never yet been produced, or at least detected, but the interesting prediction (widely accepted, but far from proven) is that there are some stable ones.