Never-before-seen “black nitrogen” plugs puzzle in periodic table (newatlas.com)
358 points by sahin on June 3, 2020 | 121 comments


> the team exposed nitrogen to extreme heat and pressure. It was pressed together between two diamonds to 1.4 million atmospheres of pressure, and over 4,000 °C (7,232 °F).

> The temperature at the inner core's surface is estimated to be approximately 5700 K (5430 °C or 9806 °F) ... The pressure in the Earth's inner core is slightly higher than it is at the boundary between the outer and inner cores: it ranges from about 330 to 360 gigapascals (3,300,000 to 3,600,000 atm)

https://en.wikipedia.org/wiki/Earth%27s_inner_core

There are probably even weirder states of nitrogen and other elements not that far away from us. Whole chemistries and semiconductor-ish effects we're only guessing at. Fun!


This is why I love science, any field where “what happens if we squish it really really hard and then superheat it?” is answered by “let’s find out” is one that has my admiration.

Scientists the world over have my respect for what they do.


"Science isn't about why, it's about why not!"


Indeed. This is way past the 'what happens if you put it in the microwave?' level of scientific enquiry, yet it has the same kind of feel to it in some way.


Dumb question, how does one convince nitrogen, a gas, to sit still between the heads of those two diamonds to be compressed?


They employed either a very persuasive graduate student, or a chamber filled with pure nitrogen.

From the experimental method section of the original paper: [1]

A BX90-type diamond anvil cell equipped with 80 μm diameter diamond anvil culets was prepared. A 200 μm thick rhenium foil was indented down to 12 μm and a sample cavity of 40 μm in diameter was laser-drilled at the center of the indentation. Two agglomerates of submicron-sized gold particles, each of approximately 2 μm in size, were loaded into the sample chamber to serve as both YAG laser absorbers and pressure gauges. The cell was then loaded with pure nitrogen gas at ~1200 bars.

[1]: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.12...


Here's what a BX90-type diamond anvil cell looks like...

https://aip.scitation.org/action/showOpenGraphArticleImage?d...


Ah thanks, if I'm understanding that correctly it sounds like the rhenium foil contained the nitrogen as it was being compressed. That makes sense.


> either a very persuasive graduate student

LOL. This was smart and funny.


Dumb question: with all we know about theoretical quantum physics, couldn't we compute what happens in these extreme conditions? Or is it still computationally intractable? If so, why is that?


Only hydrogen has an analytic solution. Even helium requires approximations because the electrons interact. Approximation requires an understanding of the structure of the wavefunction which gets increasingly complex as we add more particles to the system that interact and entangle with one another like electrons in an atom do. Even further, that's for single atoms that aren't interacting strongly with their environment. Here we're talking about an ensemble of nitrogen atoms interacting with each other in a somewhat extreme environment.

https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method


All of the cool computer science problems are in the other sciences. This one in particular looks like a challenging engineering problem to scale up.


According to 'Devs', you just need a few quantum computers and lots of gold tinfoil.


Don't forget the vacuum sealed quantum levitating elevator!


This is one of the things that makes me think we are not living in a computer simulation - the physics is really hard to simulate. Unless the simulation is just fooling us to think that ;)


It may just be rendering what we observe.


I am not a quantum physicist by far, but my understanding is that solving the Schrodinger equation for anything more complex than hydrogen is more or less impossible.

So instead we go the other way: use "close enough" experimental data and perturbation theory to approximate the results for more complex systems.


> solving the Schrodinger equation for anything more complex than hydrogen is more or less impossible.

Solving in the sense of an analytic, closed form solution to the non-relativistic Schrodinger equation (itself an approximation to QFT) using commonly accepted elementary functions. Even this requires actually evaluating the elementary functions to some degree of numerical accuracy if you want digits.

I often see this claim that only hydrogen is solvable, but I find it misleading. We have algorithms that will give solutions to the non-relativistic Schrodinger equation for helium to whatever degree of decimal precision you like (see FCI QMC methods) — how long these algorithms take to run is a different matter (see the fermion sign problem), but for helium it’s not too bad. The algorithms are unbiased, which I consider to be an exact solution.

You can think of it in the same way that we have Monte Carlo solutions to the rendering equation for global illumination. They are not closed form in terms of arbitrary elementary functions, but they converge to the exact solution over time.


I _am_ a computational physicist. The main problem I'm interested in is of a slightly different nature: just compute properties of protons, neutrons, and light nuclei from QCD, the theory of quarks and gluons. However, because the "operating system" of quantum mechanics is the same, many of the computational difficulties are shared.

I described the method we use and some of the computational difficulties in some previous threads

https://news.ycombinator.com/item?id=15780514

https://news.ycombinator.com/item?id=12048170

But, if you're comfortable with an argument from authority, suffice it to say that much of the computational power in the largest supercomputing centers across the world is dedicated to quantum many-body problems.

If you want equilibrium properties at temperature T [non equilibrium properties, like real-time dynamics have additional exponentially-bad computational intractabilities, known as the sign problem], you want to evaluate a partition function

    Z = tr[ exp( - H / T ) ]
where H is the Hamiltonian that, given an eigenstate, has that state's energy as its eigenvalue. The space H acts on grows exponentially with the number of particles in your simulation [simulations to determine crystal structures are typically in the canonical ensemble; if that doesn't mean anything to you, that's OK, but it's something you can read further on; basically it means: you try to take the number of particles to be large, rather than, for example, fixing a density].
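
The exponential growth the parent describes is easy to see in a toy model. Here's a minimal sketch of my own (not the commenter's actual code) that evaluates Z = tr[exp(-H/T)] by brute-force diagonalization of a small transverse-field Ising chain; the matrix dimension doubles with every spin you add, which is exactly why this approach dies quickly:

```python
import numpy as np

# Pauli matrices for a single spin-1/2 site
SX = np.array([[0.0, 1.0], [1.0, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, i, n):
    """Tensor product placing `op` at site i and the identity elsewhere."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def ising_hamiltonian(n, J=1.0, h=0.5):
    """H = -J * sum_i sz_i sz_{i+1} - h * sum_i sx_i (open chain)."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * site_op(SZ, i, n) @ site_op(SZ, i + 1, n)
    for i in range(n):
        H -= h * site_op(SX, i, n)
    return H

def partition_function(H, T):
    """Z = tr[exp(-H/T)] = sum over eigenvalues E of exp(-E/T)."""
    return np.sum(np.exp(-np.linalg.eigvalsh(H) / T))

for n in (2, 4, 8):
    H = ising_hamiltonian(n)
    print(n, H.shape[0], partition_function(H, T=1.0))
# Matrix dimension doubles with every spin added: 4, 16, 256, ...
```

Already at ~20 spins the matrix no longer fits in memory, which is why Monte Carlo sampling (below in the original comment) replaces exact evaluation.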

We have importance-sampling Markov Chain Monte Carlo techniques for evaluating Z. For some problems it succeeds with flying colors, although demonstrating that you're in the thermodynamic [infinite number of particles and infinite volume] limit requires calculating at a variety of volumes and particle numbers, for extrapolation. If you just do random updates to your state vector, most updates will be rejected; instead, use HMC to update the whole state at once [HMC = Hybrid Monte Carlo, or, strangely, Hamiltonian Monte Carlo if you're not a computational physicist].

This approach is not naively parallelizable, and the machines where HMC is performed require extremely high-performance interconnects, think 10-100 times faster than what's available in a standard datacenter.

The method is inherently statistical: give me more computing time and I can shrink the error bars. But if the problem has certain technical difficulties (or is in real-time, for example, rather than equilibrium), you encounter the sign problem, where the partition function you want to evaluate now becomes

    Z = tr[ exp( i H / T ) ]
and you need an exponentially large sample to resolve the intricate cancellations that arise from summing just a bunch of phases.
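
A toy numerical illustration of that last point (my own sketch, not from the parent comment): estimate the average of pure phases exp(i*phi) for phi ~ N(0, sigma^2). The exact answer, exp(-sigma^2/2), shrinks exponentially with sigma^2, while the per-sample noise stays O(1), so the sample size needed for a fixed relative error grows exponentially. That is the statistical face of the sign problem:

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_phase_average(sigma, n):
    """Monte Carlo estimate of <exp(i*phi)> with phi ~ Normal(0, sigma^2)."""
    phi = rng.normal(0.0, sigma, size=n)
    samples = np.exp(1j * phi).real
    est = np.mean(samples)
    err = np.std(samples) / np.sqrt(n)  # statistical error of the mean
    return est, err

for sigma in (1.0, 3.0, 5.0):
    est, err = estimate_phase_average(sigma, n=100_000)
    exact = np.exp(-sigma**2 / 2)
    print(f"sigma={sigma}: exact={exact:.2e} estimate={est:.2e} +/- {err:.1e}")
```

By sigma = 5 the exact answer (~4e-6) is buried far below the statistical error bar, even with 100,000 samples.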


This went way over my head but just wanted to thank you for taking the time to make this interesting post


Unfortunately the equations for anything more than a single proton can only be solved with nonlinear, self-consistent approximations. This means finding stable states for anything other than hydrogen is very hard, computationally speaking. The problem can be compared to predicting weather patterns a month in advance.


> "the new substance is crystalline, occurs in two-dimensional sheets, and could one day be useful in advanced electronics."

> "the material itself remains unstable and it quickly dissolves when the heat and pressure are relaxed."

So... advanced electronics that can only exist at 1.4 million atmospheres and 4000 degrees C? What are we building, here, computers that only work inside the sun?

By this insane standard, what material would not one day be useful in advanced electronics?


So, when someone mentions nitrogen, high temperatures, and high pressures, in the same sentence, what comes to my mind isn't electronics so much as explosives.


Pure nitrogen forms a dimer and is inert. The nitrogen that goes boom involves hydrogen and oxygen molecules.


Not quite. Pure nitrogen forms an inert dimer with a very strong bond. The nitrogen that goes boom is the energy released by forming such a bond when you liberate nitrogen from complexes with other things than single nitrogen atoms. Like hydrogen and carbon and maybe oxygen.

But this stuff in the article is pure nitrogen, and isn't a dimer, and certainly isn't inert (it decayed back to the dimer when pressure was released). It'd be a crappy explosive (needs to be kept in a diamond anvil cell...) but it would have released the energy used to form it. In theory there are other allotropes, like octaazacubane (N8) that might be more stable and would be frighteningly efficient explosives.


Allotropes aren't "pure" (STP) molecular N2.

This allotrope seems to be the sort of thing that would be (or could be) formed in a sufficiently powerful, or engineered, explosion. How or to what ends ... I'm not sure.


And "going boom" typically involves those nitrogen atoms setting off on their own to turn back into N2.


This makes me wonder: are there any other elements in the periodic table which seem to be missing allotropes, or was N the only one?


Metallic hydrogen is thought to be possible using the diamond stamping method mentioned in the article.


More on the metallic hydrogen investigations: https://www.nytimes.com/2018/08/16/science/metallic-hydrogen...

It appears that one can use lasers, diamond vises, and/or numerical simulation to get properties of metallic hydrogen.


I have only read a few short articles about it, but after it is compressed to a certain point, can it remain in the metallic state after the vise/laser is removed?


Yeah, they made a shiny thing one time... But it got away


Being previously unfamiliar both with the black phosphorus allotrope, and with newatlas.com, I clicked on the link wondering if this would be the new “red mercury”. I was pleasantly surprised that it is much more interesting (and scientifically valid) than that.


Can anyone more specifically comment on some of the potential applications for this newly discovered material?


None, at least not directly. It's an allotrope which can only exist under extreme physical conditions.

There are many such forms of well-known substances. For instance, solid water has over a dozen polymorphs (including ice-IX, or "ice-nine", which will not turn you into a human popsicle). Most of the interesting phases of ice only form under pressures of >100 MPa, so they are never seen under terrestrial conditions. The same applies to most other allotropes, with a few notable exceptions in metals and metalloids (like tin and sulphur).

It's unclear to me what the author of the article had in mind when they suggested the "potential" that "black nitrogen might have for electronics". Unless the author is imagining electronics which must be placed under immense pressure and heated to thousands of degrees to remain stable?


A friend of mine working here (https://tu-dresden.de/ing/elektrotechnik/ihm/NEM/die-profess...) told me that at the nanoscale, pressures in the gigapascal range are attainable fairly easily, so maybe this form of nitrogen could exist in special nano-islands or something like that. I guess it is important to understand the physics of the material to a certain degree and then think about engineering. This research is clearly at the beginning of that first phase, so maybe they found some interesting properties that could be useful in electronics, but that's about it.


If it has good conductivity, they probably just write in "potential for electronics" out of habit or reflex. It's probably good for science fiction writers to be able to find this stuff.


There's always the possibility that a method will be found that will allow the material to maintain at least some of these properties after "returning" to normal temp/pressures conditions.


That seems unlikely for a substance which is stable as a gas at STP. You can sometimes stabilize an allotrope of a substance which would otherwise settle into another allotrope, but there's no real way (that I'm aware of) to stave off a phase transition, especially one as energetic as solid/gas.


My ignorant guess: During a manufacturing process?


This actually bugs me about articles on new discoveries: they almost always try to talk about potential applications, no matter how far-fetched they are. Sometimes the most honest answer is that we currently don't know any potential applications.

In this case, we have something that may be similar to graphene, but can only exist at 4000°C and 1.4 million atmospheres. Graphene is interesting for electronics, so they suggest black nitrogen might be useful for electronics too. But of course graphene has the slight advantage that it's relatively easy to produce and doesn't dissolve at temperatures lower than the surface of a star.


The significance of this article is that you don't need a big science lab like CERN to do useful experiments.

Diamond anvil cells are under $10,000:

https://diamondanvils.com/product-category/diamond-anvil-cel...


> you don't need a big science lab like CERN to do useful experiments

Did anyone really ever claim that?


No, but that's still pretty cool to hear (even though everything else to conduct the experiment was probably a bit pricier if not just as pricey)


While your first statement is true, in this specific case, you actually need a "big science lab":

* lasers used to heat up the sample (at least a few 10k$),

* the actual X-ray source (i.e. a synchrotron, which costs hundreds of millions to build and millions annually to run, albeit for thousands of experiments a year),

* surrounding instruments (such as an X-ray detector for diffraction and a spectrometer+camera for temperature measurements),

* people who actually make things work.

This kind of experiment is far from cheap, technically speaking. However, access to beamtime at such state-run X-ray facilities (CERN's job is not to produce X-rays) is often free of charge, at least for academia (or as long as you publish your results). You "only" have to apply for beamtime.


And how cheaply can you create an environment of "1.4 million atmospheres of pressure, and over 4,000 °C"?


A diamond anvil makes both of those on the tips of the diamond (briefly)


Diamond anvil cells produce the pressure, however, here, the temperature is created by lasers. If you are careful enough (and reasonable enough), your diamond anvil cell can last forever, as long as you don't break the diamonds you can reuse the same cell for another sample or another run.

The pressure usually requires a pressure driver to push the diamonds towards each other (some kind of very precise gas pump) which runs on electricity.

The temperature however evolves rapidly: the sample is heated via laser (usually pulsed to observe dynamics but can also be considered continuous, i.e. above millisecond time scale).


so you don't know what CERN is? ok


Sounds like a new weapon for the next Star Trek or James Bond movie.


What are some other open mysteries in physics / chemistry / science? Anyone have some favorites?

Bonus points for simplicity + obscurity. Everyone knows that dark matter is an open mystery, but I suspect not too many people knew about black nitrogen.

EDIT: There are some nice lists on Wikipedia that are fun to dig through: https://en.wikipedia.org/wiki/Lists_of_unsolved_problems

Is Feynmanium the last chemical element that can physically exist? That is, what are the chemical consequences of having an element with an atomic number above 137, whose 1s electrons must travel faster than the speed of light? https://en.wikipedia.org/wiki/Extended_periodic_table#Elemen...


How do glasses form? What order parameter determines the properties of liquids? Are there any unifying properties determining the structure of atomic liquids? What determines the network properties of liquids?

Here's a more tangible one: is solid hydrogen stable?

I did my PhD in this area and even discovered liquid polymorphism in atomic Cerium.

There are tons of unanswered questions like this! It's bonkers how many things we know we don't know, just think of the things we don't know we don't know :p


There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know. - Rumsfeld


I've never understood why people mock him for saying this. It frequently shows up on lists of stupid political quotes/"Bushisms". But not only is it absolutely true and important for planners to understand, it's stated clearly and concisely. Do most people just look at it and see word salad? I don't get it.


I actually like the quote. But there are a couple reasons Rumsfeld deserves to get raked over the coals for it:

1. The Bush Administration lied like war criminals to lead us into the Iraq War. So even though he may have been referring to technical operational complications of winning the peace in Iraq or whatever, it was easy to construe it as more bullshit gibberish to justify a crime against humanity.

2. Related to 1. There were plenty of known knowns with Iraq that the Bush Administration treated as some variety of unknown. It sounded a lot like he was using a clever philosophical aphorism to make excuses for stupid blatant errors of judgment.

An NPR recap of the path to war at a time when the full scope of the fraud was still coming to light:

https://www.npr.org/templates/story/story.php?storyId=528939...


Because Rumsfeld was constantly getting philosophical in briefings about “what is a war? Clausewitz said…” when his job was to win an actual, concrete war in Iraq, and we had no idea who the hell were setting the IEDs and kidnapping foreigners. He was too far up his own ass to notice he was losing.


Well he is a Princeton man.


He was winning, the war machine was rolling and fresh contracts signed, in blood.


It might be interesting to see a list of "often-mocked sayings that have a point".

Two that I know of involving politicians and the Internet are

https://en.wikipedia.org/wiki/Al_Gore_and_information_techno...

https://en.wikipedia.org/wiki/Series_of_tubes

I'm sorry to say that I've personally made fun of both of these people for these soundbites in a way that I now regret.


Another is Bill Clinton's "It depends on what your definition of 'is' is", where the point was that it could either refer to "currently" or "on an ongoing basis".


An interesting case in which African-American Vernacular English would have been more expressive, as AAVE draws the distinction between "is" as in currently and "be" as in habitually. Clinton, as the first black president, should have been better able to express himself.

https://en.wikipedia.org/wiki/Habitual_be


There's a sentence in the article that is a bit misleading (or at least unclear from the phrasing):

"AAVE speakers use 'be' to mark a habitual grammatical aspect not explicitly distinguished in Standard English."

Standard American English does have explicit ways to mark habitual aspect[0]; however, it lacks one for the present tense (which is how habitual "be" is used in AAVE). Personally, I think Standard American English would benefit from adopting it.

As an aside, I've never understood that joke about Clinton. Was it a reference to something in particular?

[0] For example there is "used to" for the past tense, "would" for the past tense, "will" for an unspecified time, and a combined form with the present progressive:

I used to go to the beach every day.

I would often drink while lounging in the sun and get sunburnt.

I will make the same mistake again I'm sure.

Every time I do, I'm increasing my risk of skin cancer.


> As an aside, I've never understood that joke about Clinton. Was it a reference to something in particular?

It's a reference to his defense of saying he wasn't lying with the phrase "there's nothing going on between us" when questioned about his relationship with Monica Lewinsky.

He explained that this was true because they were no longer seeing each other at the time of questioning, but the way he said it was striking. "It depends on what the meaning of the word 'is' is. If the—if he—if 'is' means is and never has been, that is not—that is one thing. If it means there is none, that was a completely true statement"

I think it's easy to see why people would latch on to that and make jokes about it.


I meant the joke/comment about Clinton being the first black president. But I saw someone else link this, which seems to be the source:

https://www.theatlantic.com/notes/2015/08/toni-morrison-wasn...


> Clinton, as the first black president, should have been better able to express himself.

Clinton? Or you intended to write Obama?


It depends what your definition of 'black' is: https://www.theatlantic.com/notes/2015/08/toni-morrison-wasn...


Clinton has been called "The first black president" before: http://ontology.buffalo.edu/smith/clinton/morrison.html


It seems a bit charitable to say that the series of tubes has a point? I'm sure you can also attach a point to a random set of words.

The context makes it even more damning.


What's wrong with the analogy? Tubes each have a limited capacity, so do the various ways I connect to the net. That capacity can fill. Tubes connect and their contents routed similar to the contents of internet traffic. If you drew a diagram of water tubes and a diagram of internet connections they'd both resemble each other.

I'm still not sure why this is a bad analogy even in context.


In the context of the speaker's insane quote, in which he blamed streaming video for a 4-day delay of his personal emails, it doesn't make much sense. Also it's a rhetorical fallacy to suggest that because a thing is not another thing, then it must be a third thing. Really, the Internet is neither a truck nor a series of tubes.


Right. He's making an analogy about how things wait in line, and he's talking about for significant amounts of time, not fractions of a second. That's not how the internet works at all. Competing data flows slow down a bit but all run simultaneously.

We don't even have to get into the silly parts like "the people who are streaming through 10, 12 movies at a time or a whole book at a time"


The Internet is a bidirectional graph with constrained bandwidth (flow). A series of tubes is a perfectly fine analogy.

His thesis (about video streaming) was false, but his analogy was ok, people jumped on it for the wrong reasons (as it often happens in politics).


It's not hard to make an analogy work if you want to simplify the subject. Is a series of tubes a better analogy than trucks on a road network?


Ok, that's a good analogy too (packets being a thing).


"A series of tubes" was meant to be a contrast to the truck analogy.

"the Internet is not something that you just dump something on. It's not a big truck. It's a series of tubes. And if you don't understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it's going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material."

The tube vs truck analogy doesn't really make much sense here. Tubes aren't exactly famous for having discrete items queue up within them.


Did Gore actually say that he invented the internet? The linked Wikipedia article says that it's an urban legend.


The quote is "I took the initiative in creating the Internet" which is often misinterpreted as Al Gore saying he "created the internet" which I doubt was his actual intention. At a political/governmental level, he was actually quite instrumental in legitimizing and evangelizing the importance of network communications as well as funding programs that were necessary for its creation. He probably should have phrased it as "I took the initiative in the creation of the internet" or that he was "instrumental in the creation of the American internet".


While I don't have the exact quote, it's something more along the lines of "my actions caused the creation of the Internet" or "the Internet wouldn't be here if it weren't for me." There's actually not one quote since it was multiple instances that he took credit for the Internet (as we know it today). And not wrongly, really: he sponsored bills which provided federal funding for the Internet backbone system and the broadband consumer transport networks that define the Internet we use today.

But of course it was all too easy for some people to claim that he was taking credit for TCP/IP and HTTP.


No, at the time it was mocked but over time people's attitudes changed. It even has a Wikipedia page. ("While the remarks initially led to some ridicule towards the Bush administration in general and Rumsfeld in particular, the consensus regarding it has shifted over the years, and it now enjoys some level of respect.") https://en.wikipedia.org/wiki/There_are_known_knowns


I think the problem is less with the statement itself, and more to do with him using it to help justify the Iraq war, where some of the "known knowns" and "unknown unknowns" were actually speculation, exaggerated, or even made up completely.


Most people just look at it and see word salad. He is in fact stating, correctly if not clearly for a general audience, a fundamental lesson of project planning. But the whole "known-unknown / unknown-unknown" thing is way above most people's heads.


It’s not stated as clearly or precisely as I think it could be. I’ve heard it articulated better without giving each epistemic category its own jargon like “unknown unknowns”.

There are things you know, and things you don’t know. Furthermore there are things you know you don’t know. But be aware that there are things you don’t even know you don’t know.


Perhaps. I suppose that comes down to matter of taste. But either way, I certainly don't think his formulation sinks to level of mockably bad. Especially since it was spoken off the cuff in response to a question from the press.


It's easier to mock someone for the strange-sounding words coming out of their mouth than to try to understand the meaning.


The saying long predates Rumsfeld's 2002 usage:

https://en.m.wikipedia.org/wiki/There_are_known_knowns

I'd first encountered the idea of "unknown unknowns", or unk-unk, in a 1982 book, Forced Options, p. 10:

https://archive.org/details/forcedoptionssoc0000shin_c8x7/pa...

The notion dates at least as far back as 1955.


I'm pretty sure Rumsfeld was paraphrasing J.B.S Haldane from a ca. 1927 publication, or Eddington's paraphrase of Haldane's observation:

"The Universe is not only queerer than we suppose, but queerer than we can suppose."

Rumsfeld never claimed it to be original with him.


You should check Slavoj Zizek's take on this quote, where he completes the quadrant with the "unknown knowns", i.e. stuff that we know but we don't know that we know it, as in subconscious biases, ideology, etc.


I think that "tacit knowledge" is close to the idea of "unknown knowns."

Another interesting quadrant model is the Johari Window which has to do with what one knows about oneself and what others know about oneself.

https://www.communicationtheory.org/the-johari-window-model/


So this is actually official terminology used by the Project Management Institute in the PMBOK, and refers to special budgets set aside for risk contingency:

    6.4.2.6 DATA ANALYSIS
    
    Reserve analysis
    Reserve analysis is used to determine the amount of contingency and management reserve needed for the project. Duration estimates may include contingency reserves, sometimes referred to as schedule reserves, to account for schedule uncertainty. Contingency reserves are the estimated duration within the schedule baseline, which is allocated for identified risks that are accepted. Contingency reserves are associated with the known-unknowns, which may be estimated to account for this unknown amount of rework. The contingency reserve may be a percentage of the estimated activity duration or a fixed number of work periods. Contingency reserves may be separated from the individual activities and aggregated. As more precise information about the project becomes available, the contingency reserve may be used, reduced, or eliminated. Contingency should be clearly identified in the schedule documentation.
    
    Estimates may also be produced for the amount of management reserve of schedule for the project. Management reserves are a specified amount of the project budget withheld for management control purposes and are reserved for unforeseen work that is within scope of the project. Management reserves are intended to address the unknown-unknowns that can affect a project. Management reserve is not included in the schedule baseline, but it is part of the overall project duration requirements. Depending on contract terms, use of management reserves may require a change to the schedule baseline.


> How do glasses form?

Reminds me of a cool paper (I think) from one of the folks I worked with, described in the abstract[0]:

"We describe numerical simulations and analyses of a quasi-one-dimensional (Q1D) model of glassy dynamics. In this model, hard rods undergo Brownian dynamics through a series of narrow channels connected by J intersections. We do not allow the rods to turn at the intersections, and thus there is a single, continuous route through the system. This Q1D model displays caging behavior, collective particle rearrangements, and rapid growth of the structural relaxation time, which are also found in supercooled liquids and glasses. The mean-square displacement Σ(t) for this Q1D model displays several dynamical regimes: 1) short-time diffusion Σ(t)∼t, 2) a plateau in the mean-square displacement caused by caging behavior, 3) single-file diffusion characterized by anomalous scaling Σ(t)∼t0.5 at intermediate times, and 4) a crossover to long-time diffusion Σ(t)∼t for times t that grow with the complexity of the circuit. We develop a general procedure for determining the structural relaxation time tD, beyond which the system undergoes long-time diffusion, as a function of the packing fraction ϕ and system topology. This procedure involves several steps: 1) define a set of distinct microstates in configuration space of the system, 2) construct a directed network of microstates and transitions between them, 3) identify minimal, closed loops in the network that give rise to structural relaxation, 4) determine the frequencies of `bottleneck' microstates that control the slow dynamics and time required to transition out of them, and 5) use the microstate frequencies and lifetimes to deduce tD(ϕ). We find that tD obeys power-law scaling, tD∼(ϕ∗−ϕ)−α, where both ϕ∗ (signaling complete kinetic arrest) and α>0 depend on the system topology. "

But yeah, so many amazing things out there, I could read and think about this stuff forever lol

[0] https://arxiv.org/abs/1401.0960
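The anomalous Σ(t) ∼ t^0.5 regime from the abstract is easy to reproduce numerically. A minimal sketch (all parameter values are my own choices, not from the paper): identical non-crossing Brownian particles on a line are statistically equivalent to independent random walkers whose positions are re-sorted at every step, so the tagged middle particle's mean-square displacement can be estimated like this:

```python
import numpy as np

rng = np.random.default_rng(42)

def tagged_msd(n_particles=201, n_steps=1000, n_runs=100, dt=1.0):
    """MSD of the middle particle in a single-file line of Brownian hard points.

    Trick: identical non-crossing particles are statistically equivalent to
    independent walkers with their positions sorted (relabeled) at each step.
    """
    mid = n_particles // 2
    msd = np.zeros(n_steps)
    for _ in range(n_runs):
        x0 = np.sort(rng.uniform(-100.0, 100.0, n_particles))
        steps = rng.normal(0.0, np.sqrt(dt), (n_steps, n_particles))
        traj = x0 + np.cumsum(steps, axis=0)  # independent Brownian paths
        traj.sort(axis=1)                     # relabel so paths never cross
        msd += (traj[:, mid] - x0[mid]) ** 2
    return msd / n_runs

msd = tagged_msd()
t = np.arange(1, len(msd) + 1)
# Fit the late-time exponent; single-file diffusion predicts ~0.5
# (versus 1 for ordinary free diffusion).
alpha = np.polyfit(np.log(t[200:]), np.log(msd[200:]), 1)[0]
print(f"late-time MSD exponent ~ {alpha:.2f}")
```

The fitted exponent comes out well below 1, showing the sub-diffusive caging effect: a tagged particle can only move as far as its neighbors let it.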


Nice find. The Axis of Evil is particularly interesting.

https://en.wikipedia.org/wiki/Axis_of_evil_(cosmology)


Woah, what a rabbit hole that wiki article got me into. Btw, what could be the possible explanation for that "axis of evil" if we assume it's not just a series of faulty observations?


Ha, I'll leave this for a Saturday morning coffee + wikipedia binge then. Haven't done this for ages!


The search for a non-toxic, non-animal-based chemically stable red colour.


People who aren't vegetarians, fans of Berton Roueché, or both, might not know the origin of https://en.wikipedia.org/wiki/Carmine!


Exactly how charge is separated in a cloud to cause lightning.


“The dirty secret of supernovae is that in the computer models, we can’t ever actually get them to do the final ignition. There always has to be an injected trigger,” says Ashley Pagnotta at College of Charleston in South Carolina.

https://www.newscientist.com/article/2226326-black-holes-for...

With regard to the superluminal electrons above atomic number 137, it is apparently only a consequence of using a non-relativistic approximation. That apparently leaves the somewhat related issue of an imaginary ground state energy intact, though at a higher atomic number. Nice find!
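Both claims above (the superluminal speed is a non-relativistic artifact; the relativistic ground-state energy goes imaginary at a higher Z) follow from two one-line textbook formulas. A quick sketch to check them, using standard Bohr-model and point-nucleus Dirac expressions:

```python
import math

ALPHA = 1 / 137.035999084  # fine-structure constant (CODATA value)

def bohr_speed_fraction(Z):
    """Non-relativistic Bohr model: 1s electron speed as a fraction of c
    is Z*alpha, which formally exceeds 1 for Z > 137 - an artifact of the
    approximation, not real superluminal motion."""
    return Z * ALPHA

def dirac_1s_energy_factor(Z):
    """Dirac equation with a point nucleus: E_1s / (m c^2) = sqrt(1 - (Z*alpha)^2).
    The expression turns imaginary once Z*alpha > 1 (Z >= 138); accounting for
    finite nuclear size pushes the actual breakdown out to roughly Z ~ 173."""
    x = 1.0 - (Z * ALPHA) ** 2
    return math.sqrt(x) if x >= 0 else complex(0.0, math.sqrt(-x))

v137 = bohr_speed_fraction(137)      # just below 1
v138 = bohr_speed_fraction(138)      # just above 1
e138 = dirac_1s_energy_factor(138)   # complex => the point-nucleus model fails
```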


> Is Feynmanium the last chemical element that can physically exist? That is, what are the chemical consequences of having an element with an atomic number above 137, whose 1s electrons must travel faster than the speed of light?

Just a nitpick: electrons do not travel - they simply exist as a quantum phenomenon, with a probability of being found in certain places. That said, as your link also explains, when we take the electron energy into consideration, it seems like there is a limit for "regular" atoms, but beyond that they may simply exhibit weird quantum mechanics. We have no idea.


Of course they travel, or at least they have a wave function in momentum space which makes it possible to calculate the probability that we will find it at a certain point further away at a later time.

And we have very good ideas. Quantum mechanics is not some mysterious hand-waving mumbo jumbo. It is used every day to make very precise predictions.


I was watching a TED talk on the fourth phase of water [0] by Gerald Pollack where he said that we don't know why clouds form the way they do. Given the popular theories, the vapour particles should distribute evenly across the sky. In part, the talk is about how little we actually know about water.

[0] https://www.youtube.com/watch?v=i-T7tCMUDXU


Too lazy to find references, but I have seen a talk where people melt materials (they heat them far above the glass transition), then cool them and get the exact same configuration down to the molecular level, as if they were never liquid.


Whether the island of stability exists: https://en.m.wikipedia.org/wiki/Island_of_stability


Those links are excellent reading, thank you for posting them here!



Why are some materials able to conduct heat but not electricity, or vice versa?
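For metals, at least, the two are linked: the same conduction electrons carry both charge and heat, and the Wiedemann-Franz law says κ/(σT) is roughly a universal constant (the Lorenz number). Insulators like diamond break the pattern because their heat is carried by lattice vibrations (phonons) instead. A quick check with approximate room-temperature handbook values (the specific numbers are my own choices):

```python
# Wiedemann-Franz law: for metals, kappa / (sigma * T) ~ L0 (Lorenz number).
L0 = 2.44e-8  # theoretical Lorenz number, W*Ohm/K^2

T = 300.0                           # room temperature, K
kappa_cu, sigma_cu = 401.0, 5.96e7  # copper: thermal (W/m*K), electrical (S/m)
kappa_diamond = 2200.0              # diamond conducts heat ~5x better than copper...
sigma_diamond = 1e-13               # ...while being an electrical insulator

lorenz_cu = kappa_cu / (sigma_cu * T)
ratio_cu = lorenz_cu / L0                          # close to 1 for a good metal
ratio_diamond = kappa_diamond / (sigma_diamond * T) / L0  # enormous: phonons, not electrons
```

Copper lands near the predicted constant; diamond misses it by many orders of magnitude, which is exactly the signature of heat transport that doesn't involve charge carriers.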


literally every person’s PhD project contains mysteries...


The main one being how late they left it to begin writing and how they managed to finish it with so little sleep.


> To create the new form [nitrogen] was pressed together between two diamonds to 1.4 million atmospheres of pressure, and over 4,000 °C (7,232 °F)... It appears to have good conductivity, much like that of graphene, which could make it useful in future electronic devices.

Are you kidding me? Kinda grasping at straws if that's the application -- who are we selling these electronics to, magma golems? This is good & pure science, please don't sully it with such foolishness.


Everyone knows you can't report on science without promising cellphones that you only have to charge once per week.


So, basically, Nokia 3310?


What's even more depressing is that most people on this forum will probably not get this reference :(


Science journalists are pretty much required to ask about useful applications. "It was cool and we wanted to do it" may be the truth, but (sadly) it's not an acceptable answer.


When did "for science" lose its practical meaning and become a meme? At this point, "for science" is next to "hold my beer".


Probably with the release of Portal.


Totally agree.

Scientific breakthroughs shouldn't require commercial justification. That they are expanding the extent of human knowledge should be enough.


If it can be done with pressure alone, it isn't unreasonable to consider that a crystalline structure might be able to keep it in this state with no external forces.


> magma golems

I don't understand, why not? If they're magma, it doesn't mean they're not smart enough to use electronics.


> could one day be useful in advanced electronics

We're still waiting for graphene.


I love the sense of humor found in the comments of this thread.


The name reminds me of Big Bismuth!


'Tis but a flesh wound!



