Vapor chambers and heat pipes are exceptional at moving heat away from a relatively small part of themselves and distributing it across the whole.
This is because the heat dumps into a liquid which is concentrated at the heat source, but as soon as it evaporates it fills the volume in which it’s contained. Also it’s because the evaporation process itself sucks out heat at a rate that’s orders of magnitude faster than via conduction, convection, or radiation alone.
They do not cool though. They rely on the fact that the heat source is relatively small and that something else can pull heat out of them fast enough to re-condense the liquid (and since they distribute the heat that cooling can attach anywhere or everywhere — this is like an embarrassingly parallel problem in software).
In a battery pack there is a lot of surface area that gets moderately and evenly hot, and little room between the cells to extract it. This would likely result in even heating around the heat pipe, which would tend to evaporate all of the liquid inside and do nothing but raise its overall temperature after an initial delay.
What it potentially could be used for is to draw heat out of the battery pack and up to some place where better airflow is possible, or where some active cooling system could extract the heat, but there are problems with scaling up like that (typical heat pipes are manufactured with wicking inner layers to draw the working fluid back, even against gravity).
At the points where they could be used, it’s likely significantly cheaper and easier to run some controlled air or liquid cooling loop through the batteries.
Phones are ideal because a tiny little chip is producing almost all of the heat. It’s not even a lot, it’s just in a small area. Temperature goes up when heat can’t escape, so in this case, spreading the heat around even a few square inches can be a major factor in keeping down the temperature of the CPU/GPU.
Contrast this with EV batteries, which are huge and already produce evenly distributed heat; there just isn’t the same value when things are big enough to add cooling systems, when cooling systems are necessary anyway, and when the heat pipe or vapor chamber just adds another piece to the system.
When a process crashes, its supervisor restarts it according to some policy. Policies specify whether to restart only the crashed process, or also its sibling processes in their startup order.
But a supervisor also sets limits, like “10 restarts in a timespan of 1 second.” Once the limits are reached, the supervisor crashes. Supervisors have supervisors.
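As a rough illustration of that restart-limit idea, here is a toy sketch (TypeScript, not OTP; the names and shape are mine):

    // Toy supervisor: restart a crashing child, but give up ("crash" ourselves)
    // once there have been too many restarts inside a rolling time window,
    // mirroring a "10 restarts in 1 second" style limit.
    type Policy = { maxRestarts: number; withinMs: number };

    class ToySupervisor {
      private restartTimes: number[] = [];
      constructor(private startChild: () => Promise<void>, private policy: Policy) {}

      async run(): Promise<void> {
        for (;;) {
          try {
            await this.startChild(); // run the child until it exits or throws
            return;                  // normal exit: nothing to restart
          } catch (err) {
            const now = Date.now();
            this.restartTimes = this.restartTimes.filter(
              (t) => now - t < this.policy.withinMs
            );
            this.restartTimes.push(now);
            if (this.restartTimes.length > this.policy.maxRestarts) {
              // Limit reached: escalate by crashing the supervisor itself so
              // that *its* supervisor can apply a broader restart.
              throw err;
            }
            // Otherwise loop and restart the child.
          }
        }
      }
    }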
In this scenario the fault cascades upward through the system, triggering broader restarts and state re-initializations until the top-level supervisor crashes and takes the entire system down with it.
An example might be losing a connection to the database. It’s not an expected fault to fail while querying it, so you let it crash. That kills the web request, but then the web server ends up crashing too because too many requests failed, then a task runner fails for similar reasons. The logger is still reporting all this because it’s a separate process tree, and the top-level app supervisor ends up restarting the entire thing. It shuts everything off, tries to restart the database connection, and if that works everything will continue, but if not, the system crashes completely.
Expected faults are not part of “let it crash.” E.g. if a user supplies a bad file path or network resource. The distinction is subjective and based around the expectations of the given app. Failure to read some asset included in the distribution is both unlikely and unrecoverable, so “let it crash” allows the code to be simpler in the happy path without giving up fault handling or burying errors deeper into the app or data.
The other comment explains this, but I think it can also be viewed differently.
It’s helpful to recognize that the inner script tags are not actual script tags. Yes, once the browser enters a SCRIPT element, it switches parsers and wants to skip everything until a closing script tag appears. The STYLE, TITLE, and TEXTAREA elements (and a few others) do this too. Once the HTML is chopped up like this, the contents are handed to a separate inner parser (in this case, the JS engine). SCRIPT is unique due to the legacy behavior^1.
HTML5 specifies these “inner” tags as transitions into escape modes. The entire goal is to allow JavaScript to contain the string “</script>” without it leaking to the outer parser. The early pattern of hiding scripts inside an HTML comment is what shaped the escaping mechanism, rather than inventing some special syntax (which today does exist, as noted in the post).
The opening script tag inside the comment is actually what triggers the escaping mode, and so it’s less an HTML tag and more some kind of pseudo JS syntax. The inner closing tag is therefore the escaped string value and simultaneously closes the escaped mode.
Consider the use of double quotes inside a string. We have to close the outer quote, but if the inner quote is escaped like `\"` then we don’t have to close it — it’s merely data and not syntax.
There is only one level of nesting, and eight opening tags would still be “closed” by the single closing tag.
^1: (edit) This is one reason HTML and XML (XHTML) are incompatible. The content of SCRIPT and STYLE elements is essentially just bytes. In XML it must be well-formed markup. XML parsers cannot parse HTML.
Whoever the idiot was who came up with piling inline CSS and JS into the already heavy SGML syntax of HTML should've considered his career choices. It would've been perfectly adequate to require script and CSS to be put into external "resources" linked via src/href, especially since the spec proposals operated under the assumption that there would be multiple script and styling languages going forward (like, hey, if we have one markup and styling language, why not have two or more?). When in fact the rules were quite simple: in SGML, text rendered to the reader goes into content; everything else, including formatting properties, goes into attributes.

The reason for introducing this inlining misfeature was probably the desire to avoid a network roundtrip, which would've later been made bogusly obsolete by Google's withdrawn HTTP/2 push spec, but also the bizarre idea that anyone except webdev bloggers would be editing HTML+CSS by hand. To think there was a committee overseeing such blunders as "W3C recommendations" - actually, they screwed up again with CSS when they allowed unencoded inline data URLs such as those used for SVG backgrounds and the like. The alarm bells should've been ringing at the latest the moment they seriously considered storing markup within CSS, as with the abovementioned misfeature but also with the "content:" CSS property. You know, "recommendation" as in what W3C final-stage specs were called.
Not just his convenience. Man-millennia of convenience, if you will ;) I too love the fact that many things can be single index.html files, with no need for a zip file. It's double-click to view. One of the best things about the web platform.
Edit: and "effort", please. The spec has a simple and clear note:
> The easiest and safest way to avoid the rather strange restrictions described in this section is to always escape an ASCII case-insensitive match for "<!--" as "\x3C!--", "<script" as "\x3Cscript", and "</script" as "\x3C/script" when these sequences appear in literals in scripts (e.g. in strings, regular expressions, or comments), and to avoid writing code that uses such constructs in expressions. Doing so avoids the pitfalls that the restrictions in this section are prone to triggering.
Backwards compatibility is easily and completely worth this small amount of effort. It's a one-liner in most languages.
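For example, here is roughly what that one-liner looks like when serializing JSON for an inline script (a minimal sketch; the function name is mine, and \u003C is legal in both JSON and JavaScript string literals, so the parsed value is unchanged):

    // Escape "<" so that "<!--", "<script" and "</script" can never appear
    // literally in the serialized output.
    function toInlineScriptJson(value: unknown): string {
      return JSON.stringify(value).replace(/</g, "\\u003C");
    }

    // The payload may now contain "</script>" without closing the element.
    const payload = toInlineScriptJson({ note: "</script><script>alert(1)</script>" });
    const html = `<script>window.__DATA__ = ${payload};</script>`;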
The easiest and safest way to avoid the rather strange restrictions described is to not make use of inline script in a way that makes those restrictions necessary, though. And a "recommendation" should reflect that (from back when HTML recommendations were actually published rather than random Google shills writing whatever on GitHub). The suggested workaround is also not without criticism (e.g. [1]).
> It would've be perfectly adequate to require script and CSS to be put into external "resources" linked via src/href
Bullshit - Navigator and IE didn't have HTTP/2. I'm guessing you didn't use dialup, where your external CSS or JavaScript regularly failed to load. You didn't add extra dependencies, because IE would only open two concurrent connections to load files.
It's easy to criticize past mistakes from your armchair, but I suggest you try to be a little more fair towards the people that made those decisions, especially when overall HTML has been a resounding success.
Huh, it’s still confusing to me why they would have this double-escaping behavior only inside an HTML comment. Why not have it always behave one way or the other? At what point did the parsing behavior inside and outside HTML comments split and why?
At some point I think I read a more complete justification, but I can’t find it now. There is evidence that it came about as a byproduct of the interaction of the HTML parser and JS parsers in early browsers.
In this link we can see the expectation that the HTML comment surrounds a call to document.write() which inserts a new SCRIPT element. The tags are balanced.
In this HTML 4.01 spec, it’s noted that HTML comments can be used to hide the script contents from being rendered, which is where we start to get the notion of using them to hide markup from display.
My guess is that at some point the parsers looked for balanced tags, as evidenced in the note in the last link above, but then practical issues with improperly-generated scripts led to the idea that a single SCRIPT closing tag ends the escaping. Maybe people were attempting to concatenate script contents wrong and getting stacks of opening tags that were never closed. I don’t know, but I suppose it’s recorded somewhere.
Many things in today’s HTML arose because of widespread issues with how people generated the content. The same is true of XML and XHTML, by the way. Early XML mailing lists were full of people parsing XML with naive Perl regular expressions and suggesting that, when someone wants to “fix” broken markup, they do it with string-based find-and-replace.
The main difference is that the HTML spec went in the direction of saying, _if we can agree on how to handle these errors, then in the face of some errors we can display some content_, and we can all do it in the same way. XML is worse in some regards: certain kinds of errors are still ambiguous and left up to the parser to handle, whether they are recoverable or not. For non-recoverable errors, the presence of a single error destroys the entire document, like being refused a withdrawal at the bank because you didn’t cross a 7.
At least with HTML5, it’s agreed upon what to do when errors are present and all parsers can produce the same output document; XML parsers routinely handle malformed content and do so in different ways (though most at least provide or default to a strict mode). It’s better than the early web, but not that much better.
Discussing why parsing HTML SCRIPT elements is so complicated, the history of why it became the way it is, and how to safely and securely embed JSON content inside of a SCRIPT element today.
This was my first submission, and the above comment was what I added to the text box. It wasn’t clear to me what the purpose was, but it seemed like it would want an excerpt. I only discovered after submitting that it created this comment.
I guess people just generally don’t add those?
Still, to help me out, could someone clarify why this was down-voted? I don’t want to mess up again if I did something wrong, but I don't understand what the issue was.
> Leave url blank to submit a question for discussion. If there is no url, text will appear at the top of the thread. If there is a url, text is optional.
Most people will opt to leave the text out when submitting a link, unless they're showing their own product (Show HN), because there is an expectation that you will attempt to read an article before conversing about it.
I think it's just because, as a comment, it looks pretty random and somewhat off topic, since it's a summary of the article instead of an opinion on it.
I think most of the time people don't add a comment to submissions, but if they do it's more of the form: I found X interesting because of [insert non-obvious reason why X is interesting], or some additional non-obvious context that's needed.
In any case, I don't think there is any reason to worry too much. There was no ill intent, and at the end of the day it's all just fake internet points.
You can even interactively rebase after the fact while preserving merges. I usually work these days on stacked branches, and instead of using --fixup I just commit with a dumb message.
git rebase -i --rebase-merges --keep-base trunk
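# -i: interactive rebase; --rebase-merges: recreate merge commits instead of flattening them;
# --keep-base: rebase onto the branch's existing merge-base with trunk, so only your own commits are rewritten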
This lets me reorganize commits, edit commit messages, split work into new branches, etc… When I add --update-refs into the mix it lets me do what I read are the biggest workflow improvements from jj, except it’s just git.
This article from Andrew Lock on stacked branches in git was a great inspiration for me, and where I learned about --update-refs:
You understand it correctly: it doesn’t remove any value from fixup; instead it just means you can fixup to whatever and move the commit visually to where you want it — doesn’t have to be the main/trunk branch, and you can do it whenever is convenient.
That’s where I made the comment about not actually running fixup. Instead of claiming to fix a SHA, I leave myself a note like “fix tests” so I can move it appropriately even if I get distracted for a few days.
TECs are wonderful little devices with operating characteristics unlike those of any comparable cooling technology.
They can be designed to move a specific amount of heat or to cool at some delta-T below the hot side (and due to inefficiencies the hot side can climb above ambient temperatures too, raising the “cold side” above ambient!)
I ran through a design exercise with a high-quality TEC, and at an 8°C delta-T for a wine cooler you could expect a COP of around 3.5–4 (theoretically). This is pretty good! But at or below the 2.5V maximum needed to do that, you’re only able to exhaust up to around 40W. For a wine cooler this is not so bad. For a refrigerator it’s a harder challenge, because the inside warms up whenever the door opens, and if someone sticks in a pot of hot soup, it’s important to eject that heat before it raises the temperature inside to levels where food safety becomes a problem. For a CPU it’s basically untenable under load, because too much heat enters the cold side and temperatures will rise.
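To make that arithmetic concrete, here’s a rough back-of-envelope sketch (my assumptions, not stated in the comment: the ~40W figure is heat pumped from the cold side, and COP means heat moved divided by electrical power in):

    // Back-of-envelope from the numbers above (assumptions noted in the lead-in).
    const heatPumpedW = 40;   // ~40 W at up to 2.5 V and 8 °C delta-T
    const cop = 3.5;          // low end of the 3.5–4 estimate
    const electricalW = heatPumpedW / cop;             // ≈ 11.4 W drawn by the TEC
    const hotSideRejectW = heatPumpedW + electricalW;  // ≈ 51.4 W the heat sink must shed
    console.log({ electricalW, hotSideRejectW });

So even in the good case, the hot-side heat sink has to shed noticeably more heat than the cold side absorbs.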
- Most TECs are cheap and small and come without data sheets, so people tend to become disillusioned after running them too hot.
- You have to keep the hot side cool or else the delta-T doesn’t help you. For a wine cooler this is probably no big deal: you can add a sizable fan and heat sink. For CPU cooling it becomes a tighter problem. You basically can’t win by mounting on the CPU; they are best at mediating two independent water-cooling loops.
- Q ratings are useless without performance graphs. It’s meaningless to talk about a “100W” TEC other than to estimate that it has a higher capacity than a “20W” TEC.
- Ratings and data sheets are hypothetical best cases. Reality constrains the efficiency through a thousand cuts.
When I think about TECs I think more about heat transfer than temperature drops. If you open a well-insulated wine cooler once a week then once it cools it will only need to maintain its temperature, and that requires very little heat movement. Since nothing inside is generating heat you basically have zero watts as a first-order approximation. For the same device mentioned above, it stops working below 1V, and at 8° delta-T that’s a drop in COP to around zero but it’s also nearly zero waste. If you were to maintain a constant 2.5V, however, it would continue to try and pull 40W to the hot side. This would cause the internal temperature to drop and your COP would decrease even though the TEC is using constant power. The delta-T would in fact increase until the inefficiencies match the heat transfer and everything stabilizes. In this case that’s around a 20° drop from the hot side, assuming perfect insulation.
Unlike compressors, TECs have this convenient ability to scale up and down and maintain consistent temperatures; they just can’t respond quickly and dump a ton of heat in the same way.
Several years ago I replaced the thermal paste in my MacBook Pro and I did it in two steps: first to high-end paste; and second to liquid metal.
The results were impressive, and I think it’s underappreciated how much paste degradation over time impacts perceived laptop speed. I’ve been tempted to replace the paste on new devices but haven’t taken that plunge.
Unicode has a range of Tag Characters, created for marking regions of text as coming from another language. These were deprecated for this purpose in favor of higher level marking (such as HTML tags), but the characters still exist.
They are special because they are invisible and sequences of them behave as a single character for cursor movement.
They mirror ASCII so you can encode arbitrary JSON or other data inside them. Quite suitable for marking LLM-generated spans, as long as you don’t mind annoying people with hidden data or deprecated usage.
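As a rough sketch of how that mirroring works (my own illustration, not an established API; the tag characters sit at an offset of 0xE0000 above printable ASCII):

    const TAG_OFFSET = 0xe0000; // U+E0020–U+E007E mirror ASCII 0x20–0x7E

    // Encode printable ASCII (e.g. a small JSON blob) as invisible tag characters.
    function encodeAsTags(ascii: string): string {
      return [...ascii]
        .map((ch) => {
          const cp = ch.codePointAt(0)!;
          if (cp < 0x20 || cp > 0x7e) throw new Error("printable ASCII only");
          return String.fromCodePoint(TAG_OFFSET + cp);
        })
        .join("");
    }

    // Recover the hidden ASCII from a string, ignoring everything else.
    function decodeTags(text: string): string {
      return [...text]
        .map((ch) => ch.codePointAt(0)!)
        .filter((cp) => cp >= TAG_OFFSET + 0x20 && cp <= TAG_OFFSET + 0x7e)
        .map((cp) => String.fromCharCode(cp - TAG_OFFSET))
        .join("");
    }

    // e.g. append encodeAsTags('{"source":"llm"}') after a generated span; it
    // renders as nothing but travels with the text through many copy/paste paths.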
Can't I get around this by starting my text selection one character after the start of some AI-generated text and ending it one character before the end, Ctrl-C, Ctrl-V?
Yes, that’s correct. All of these measures, of course, stand as a courtesy and are trivial to bypass, as ema notes.
Finding cryptographic-strength measures to identify LLM-generated content is a few orders of magnitude harder than optimistically marking it. Besides, it still relies on the content producer adding those indicators in the first place, which can’t be ignored as a major source of missing metadata.
But sometimes lossy mechanisms are still helpful, because people who aren’t acting with malicious intent might copy and paste without being aware that the content is generated, while an auditor (anyone who inspects one level deeper) can, in some (most?) cases, discover the source of the content.
The story of XHTML is instructive for the field of software design. There are plenty of good resources on the web if you search “why did XHTML fail?”
HTML parsing, at least, is deterministic and fully specified, whereas XHTML, being XML, leaves the handling of a number of syntax errors up to the parser, i.e. undefined.
> Conforming software may detect and report an error and may recover from it.
While fatal errors should cause all parsers to reject a document outright, this also leaves the end user without any way to recover the information they care about. So XHTML leaves readers at a loss while failing to eliminate parsing ambiguity and undefined behavior.
Interestingly, it’s possible to encode an invalid DOM with XHTML while it’s impossible to do so in HTML. That means that XML/XHTML has given up the possibility of invalid syntax (by acting like it doesn’t exist) for the sake of inviting invalid semantics.
Interesting perspective; it makes me miss XHTML wayyy less. I was under the impression that XHTML (XML) was better specified and had less weirdness. I know HTML is now better specified, but some of the things inherited from HTML 4 and before make no sense to me (optional closing tags SOMETIMES, optional stuff everywhere).
HTML didn’t make sense to me until I realized it’s built on a state machine and its rules are based on what’s on the stack of open elements. For example, a number of tags trigger a rule to close open P elements or list items, and many end tags trigger a rule saying something like “close open elements until you’ve closed one with the same name as this tag.”
This, IMO, is a bigger reason to avoid regex and XML parsers for HTML documents. The rules aren’t apparent when thinking linearly about what strings appear after or before each other; they become clearer when thinking of HTML as a shorthand syntax for certain kinds of push and pop operations.
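As a toy illustration of that push/pop view (my own sketch, not the real HTML5 tree-construction algorithm):

    // An end tag pops the stack of open elements until it closes one with the
    // same name, which is why an unclosed <p> or <li> gets implicitly closed
    // when an enclosing element ends.
    function handleEndTag(openElements: string[], name: string): void {
      while (openElements.length > 0) {
        const top = openElements.pop()!;
        if (top === name) break; // closed the matching element; done
        // anything above it on the stack is implicitly closed on the way down
      }
    }

    // e.g. with ["html", "body", "div", "p"] open, an end tag for "div"
    // implicitly closes the <p> first, leaving ["html", "body"].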
XHTML is easier to parse, but for well-formed documents it pushes the complexity of invalid markup onto the rendering side. For example, it’s well-formed to include a button inside a button, so XHTML browsers render exactly that, even though it makes no sense from a UI perspective, and strange things happen when such well-formed but invalid markup is sent as XML.
The memory test is missing some deprecated and non-conforming elements. The HTML spec doesn’t have a single comprehensive list either, so it can be a little tricky to define or name “all” of the elements.
For example, there are elements like nextid or isindex which don’t have element definitions but which appear in the parsing rules for legacy compatibility. These are necessary to avoid certain security issues, but the elements should not be used and in a sense don’t exist even though they are practically cemented into HTML forever.