Has it actually reached a properly functional state?
The showcase video didn't look very convincing, and neither the website nor the Discord channel contained much more information. Although I didn't dig through the Discord history too carefully.
It's one thing to hook up 200 Hall effect sensors to an MCU and read a few of them, or send data over HID at 8000 Hz. It's a different thing to read all 200 at 8000 Hz (that's 1.6 million sensor reads per second) and figure out the pen position with reasonable resolution and accuracy.
Can it also detect the exact moment the pen touches the tablet, or additional button clicks? Or does it require tapping a keyboard with the other hand? Which is probably fine for osu!, but less so for drawing.
Regarding the last point, pompyboard is very much a tablet or pointing device meant only for enthusiast osu! players, from what I understand. No artist in the world needs an 8 kHz polling rate tablet, or even 1 kHz. Tablets from other brands are much better suited for drawing.
While the basic idea of a rectangle you put a pen on is the same for artists and osu! players, the more detailed requirements are basically opposites.
Basically:
- Pen click is useless for osu! or can just be digital, while artists would want analog pressure
- Buttons on a pen are actively detrimental for osu! but very useful for artists
- Smoothing on a tablet is more detrimental for osu! the more of it there is but absolutely necessary for artists
- High polling rate is useless for artists (they would have input delay due to the smoothing they need either way) but very useful for osu!
- Big tablets are useless for osu! players as they typically only use a 5-15cm area while they are very useful for artists
I think the entire point of something like pompyboard is to make a tablet just for osu!, which doesn't exist right now. Meanwhile, for artists there is already a whole industry of tablets available.
Except your giant comment doesn't actually explain why it used uint64. The only place mentioning uint64 is the integer promotion, which only happens because you used a 64-bit integer, so there is no explanation of why.
Was it done because shifting by an amount equal to or greater than the integer width is undefined behavior? That would still not require storing the result in a 64-bit mask; shifting (~0ULL) would be enough. That would be a lot more valuable to explain than how bitwise AND works.
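If that was the reason, the idea is roughly this (a minimal sketch, assuming the mask in question is a 32-bit prefix mask; the function name is made up for illustration):

```cpp
#include <cstdint>

// What a 32-bit-only version would look like -- undefined behavior for
// prefix == 0, because shifting a 32-bit value by 32 is UB:
//   uint32_t mask = ~0u << (32 - prefix);
//
// Branchless alternative: do the shift in 64 bits, where a shift by 32 is
// well defined, then truncate back down to 32 bits.
uint32_t prefix_mask(unsigned prefix) {   // prefix in [0, 32]
    return static_cast<uint32_t>(~0ULL << (32 - prefix));
}
// prefix_mask(0)  == 0x00000000
// prefix_mask(24) == 0xFFFFFF00
// prefix_mask(32) == 0xFFFFFFFF
```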
The first one also seems slightly sketchy, but without knowing the rest of the details it's hard to be sure. An IPv6 address is 128 bits; that's two registers' worth of integers. Calculating the base address would take two bitwise instructions. The cost of copying them would in most cases be negligible compared to doing the lookup in whatever container you are searching for the resulting address. If you are storing addresses as dynamically allocated byte arrays (which would make copying non-trivial) and processing them in such a hot loop that it matters, then it seems like you have much bigger problems.
For my taste it would be sufficient to say "Iterate in reverse order from the most specific address to the least specific. That way the address can be calculated in place by incrementally clearing the lowest bits." Having two paragraphs of text which repeat the same idea in different words is more distracting than helpful.
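In code, that reverse iteration could look roughly like this (a sketch, not the actual implementation; the 128-bit address is held as two 64-bit words, and lookup_prefix is a hypothetical stand-in for whatever table lookup is used):

```cpp
#include <cstdint>

struct Addr128 { uint64_t hi, lo; };   // 128-bit address as two machine words

// Hypothetical lookup of an already-masked base address with a given prefix length.
bool lookup_prefix(const Addr128& base, int len);

// Iterate from the most specific prefix (/128) to the least specific (/0),
// clearing one low bit of the remaining prefix per step, so the masked base
// address is updated in place instead of being recomputed from scratch.
bool longest_prefix_match(Addr128 addr) {
    for (int len = 128; len >= 0; --len) {
        if (lookup_prefix(addr, len))
            return true;
        if (len > 64)
            addr.lo &= ~(1ULL << (128 - len));  // clear lowest bit of the /len prefix
        else if (len > 0)
            addr.hi &= ~(1ULL << (64 - len));
    }
    return false;
}
```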
Sorry, I didn't explain why uint64 was used. I wrote this many years ago so my memory is foggy, but I went through a few iterations using uint32 that ended up needing branches for the masks. This was the only branchless way I could come up with at the time after a few attempts. I think the example was more to demonstrate that the algorithm was correct, since I wasn't going to unit test it at that scope.
As for the 128-bit addresses, we used Boost's ip::address_v6 to_bytes(), as it appears there was no masking option.
> For my taste it would be sufficient to say "Iterate in reverse order from the most specific address to the least specific. That way the address can be calculated in place by incrementally clearing the lowest bits." Having two paragraphs of text which repeat the same idea in different words is more distracting than helpful.
Ah, apologies, too late now; I should've mentioned it in the PR. But I expected it would ruffle some feathers; I don't care for conventions or other people's quibbles. As long as it improves understanding for the reader, regardless of how "redundant", then mission accomplished.
If you had read the link you posted, you would know those are various historic proposals for language reforms, not something that's widely used. In the same way, there have been various proposals for English spelling reforms. That's not the normal way of writing or reading Russian. Russian uses the Cyrillic alphabet, which if anything is closer to the Greek alphabet than to Latin. There are other Slavic languages which actually use Latin-based scripts. If you don't read Russian it might look like some of the letters are similar, but that's only true for less than half of the alphabet, and of those, half have a completely different meaning than their Latin lookalikes.
Yes, there are various schemes for transliterating Russian into Latin script, which people occasionally use for various reasons: typing on a computer or phone that hasn't been fully configured for the Russian language, contexts where Unicode isn't supported, or making street signs legible for tourists. That's different from a "Russian Latin alphabet". In most cases where proper Cyrillic is problematic, a dedicated "Russian Latin alphabet" based on Latin with extra diacritic marks would also be problematic.
A similar thing could be said about other languages like Japanese or Chinese, but I don't think anyone would describe them as "languages that use the Latin alphabet".
As for typing on a keyboard, the main Russian layout is nothing like QWERTY. Computer keyboards sold in the relevant regions often have dual labels. I personally never learned touch typing in Cyrillic and use the phonetic layout in the rare cases I need to, since for me it was a second foreign language.
Which exact approach Click chose - who knows. Will it be possible to choose your preferred Russian layout like on a desktop computer? Likely not. If they supported that, I would have expected them to also add layouts for more languages. Although maybe they didn't want to promise anything for languages for which they don't have OS UI translations.
Fortunately, I then assumed that I knew nothing and asked anyways. I'm glad I did — this thread is now much more interesting than the one-word comment conveyed to me at first.
beach balls just cause havoc bouncing around and potentially knocking things over. they're a nuisance.
blankets tend to get laid out on the floor for people to sit on, which takes up a lot of space and causes havoc for foot traffic when people are not expecting to have to step over someone. also, they can be used to start fires. these are the same reasons they are no longer allowed at outdoor concert venues for certain types of shows.
A few more, more about editing than just rendering:
The style change mid-ligature has a related problem. While it might be reasonable not to support a style change in the middle of a ligature, you still want to be able to select individual letters within ligatures like "ff", "ffi" and "fl". The problem, just like with the color change, is that neither the text shaper nor the program rendering the text knows where each individual letter within the ligature glyph is positioned. The font simply lacks this information.
From what I have seen, most programs which support it use a similar approximation to what Firefox uses for coloring - split the ligature into equal parts. That works well enough for something like "fi" or "fl", not so much for some of the ligatures in programming fonts that combine >= into ≥.
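The equal-split approximation boils down to something like this (a sketch; names and units are made up for illustration):

```cpp
#include <vector>

// Approximate caret positions inside a ligature glyph by splitting its
// advance width into equal parts, one per source character. Tolerable for
// "fi"/"fl"; poor for ligatures whose components have very different widths,
// like a programming font collapsing ">=" into a narrow "≥".
std::vector<double> approx_ligature_carets(double ligature_advance, int num_chars) {
    std::vector<double> carets;
    for (int i = 1; i < num_chars; ++i)
        carets.push_back(ligature_advance * i / num_chars);
    return carets;  // offsets from the ligature's origin
}
```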
There are even worse edge cases in scripts for other languages. There are ligatures which look roughly like the two characters that formed them side by side, but in reverse order. There are also ligatures in CJK fonts which combine four characters into a square.
Backspace erases characters at a finer granularity than it's possible to select them.
With regards to LTR/RTL selection weirdness, I recently discovered that some editors display a small flag on the cursor indicating the direction at the current position when it's in mixed-direction text.
A technical note: OpenType Layout does have a way of representing the appropriate _cursor positions_ to use for components of a ligature[1], which is a good proxy for where the individual glyph boundaries are in the trivial case (fi and fl, say), but these tables are not reliably included in all fonts, and they are not actually used by much client software (last I checked they were used by CoreText but not by HarfBuzz or DirectWrite).
The user may not think of the letters as connected. Suppose the user wanted to write "stuffing" and bold the letters "ing". The user may well not realize that the font thinks of "ffi" as anything other than three separate letters.
Ligatures like the one in "stuffing" aren't the worst case for mid-ligature styling. You could introduce a split between "stuff" and "ing", preventing the ligature from forming, and it would likely still look reasonable. That's actually one of the most straightforward things you can do in text layout: split the text into runs with the same style and then shape each run separately. That's also how you end up with the mess shown in the Safari screenshot.
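That run-splitting approach is roughly the following (a sketch with hypothetical types; shape_run stands in for the actual shaper call):

```cpp
#include <string>
#include <vector>

struct Style { bool bold = false; /* color, size, font, ... */ };
struct StyledRun { std::string text; Style style; };
struct ShapedRun { /* positioned glyphs for one run */ };

// Hypothetical call into the shaper for a single, uniformly styled run.
ShapedRun shape_run(const StyledRun& run);

// Split the text wherever the style changes and shape each run on its own.
// A ligature that would have spanned a style boundary (the "ffi" in
// "stuffing" with a bold "ing") never forms, because the shaper never sees
// those characters in the same run.
std::vector<ShapedRun> layout(const std::vector<StyledRun>& runs) {
    std::vector<ShapedRun> shaped;
    for (const auto& run : runs)
        shaped.push_back(shape_run(run));
    return shaped;
}
```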
In non-English scripts, where ligatures are less optional, things are trickier. Not applying the ligature can significantly affect the look, especially in fonts/scripts where ligatures are used for diacritic marks or for syllable-based combinations of characters into a single glyph.
Another aspect of mid-ligature color changes is that if you allow color changes, you probably allow any other style change, including font size or the font itself, which in turn can have a completely differently sized and shaped glyph for the corresponding ligature, and even a different set of ligatures. That makes drawing the corresponding characters as a single ligature impossible.
One of the most warranted and also one of the trickiest cases for wanting a mid-ligature style change is language education materials. You might want to highlight individual subcomponents of complex character combinations to explain the rules behind them. For these cases the Firefox splitting hack is not good enough. Although it seems like in the current version of Firefox on Linux न्हृे is handled much better than in the 2019 screenshot; this might be as much an improvement in Firefox and the underlying libraries as in the font.

At the end of the day, if a font draws a complex character combination as a single shape, there is nothing the font rendering software can do to correctly split it into logical components. Instead of ligatures you can draw such characters as multiple overlapping and appropriately placed glyphs (possibly in combination with context-aware substitutions). Kind of like zalgo text: no font has separate glyphs for each letter with every combination of 20 stacked diacritic marks. That way the information about the components isn't lost, making it technically possible to correctly style each of them, but it's still not easy.
You might want to take a look at https://diffusionillusions.com/ . You don't need specialized models; a little bit of traditional code for enforcing constraints on top of general purpose models can do quite a bit.
A partial zip shouldn't be totally useless, and a good unzip tool should be able to repair such partial downloads. In addition to the catalog (central directory) at the end, zip files also have local headers before each file entry. So unless you are dealing with a maliciously crafted zip file, or a zip file combined with something else, parsing it from the start should produce an identical result. Some zip parsers even default to this sequential parsing behavior.
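A rough sketch of that sequential parsing (assuming a little-endian host, and ignoring the data-descriptor and Zip64 cases, which need more care):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Walk a (possibly truncated) zip from the start using only the local file
// headers, ignoring the central directory at the end. Returns the names of
// the entries whose headers and data are fully present in `buf`.
std::vector<std::string> list_partial_zip(const std::vector<uint8_t>& buf) {
    std::vector<std::string> names;
    size_t pos = 0;
    while (pos + 30 <= buf.size()) {              // 30 = fixed local header size
        uint32_t sig;
        std::memcpy(&sig, &buf[pos], 4);
        if (sig != 0x04034b50)                    // "PK\x03\x04" local file header
            break;                                // central directory or junk
        uint16_t flags, name_len, extra_len;
        uint32_t comp_size;
        std::memcpy(&flags,     &buf[pos + 6],  2);
        std::memcpy(&comp_size, &buf[pos + 18], 4);
        std::memcpy(&name_len,  &buf[pos + 26], 2);
        std::memcpy(&extra_len, &buf[pos + 28], 2);
        if (flags & 0x8)                          // sizes live in a trailing data
            break;                                // descriptor; skipped for brevity
        size_t data_end = pos + 30 + name_len + extra_len + comp_size;
        if (pos + 30 + name_len > buf.size() || data_end > buf.size())
            break;                                // this entry was cut off
        names.emplace_back(reinterpret_cast<const char*>(&buf[pos + 30]), name_len);
        pos = data_end;
    }
    return names;
}
```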
This redundant information has led to multiple vulnerabilities over the years, since a maliciously crafted zip file with conflicting headers can have two different interpretations when processed by two different parsers.
More like HTML, and getting different browsers to render a pixel-perfect identical result (which they don't), including text layout and shaping. Where "different browsers" doesn't mean just Chrome, Firefox and Safari, but also IE6 and CLI-based browsers like Lynx.
PDFs at least usually embed the used subset of fonts and contain explicit placement of each glyph, which is also why editing or parsing text in PDFs is problematic. Although PDF too has many variations of the standard and countless Adobe-exclusive extensions.
Even when you have exactly the same font, text shaping is tricky. And with SVG's lack of ability to embed fonts, files which unintentionally reference a system font or a generic font aren't uncommon. And when you don't have the same font, it's very likely that any carefully placed text on top of a diagram will be more or less misplaced, wrap badly, or even completely disappear due to lack of space, because there is zero consistency in metrics across different fonts.
The situation with the specification is also not great. SVG 1.1 alone defines certain official subsets, and in practice a lot of software picks whatever is most convenient for it.
The SVG 2.0 specification has been in limbo for years, although it seems like recently the relevant working group has resumed discussions. Browser vendors are pushing towards synchronizing certain aspects of it with HTML-adjacent standards, which would make fully supporting it outside browsers even more problematic. It's not just polishing little details: many major parts that were in earlier drafts are getting removed, reworked or put on the backlog.
There are features which are impractical to implement, or which you don't want to implement, outside major web browsers that have a proper sandboxing system (and even that's not enough once uploads get involved): CSS, JavaScript, external resource access across different security contexts.
There are multiple different parties involved, with different priorities and different thresholds for what features are sane to include:
- SVG as a scalable image format for icons and other UI elements in (non browser based) GUI frameworks -> anything more complicated than colored shapes/strokes can be problematic
- SVG as a document format for desktop vector graphics editors (mostly Inkscape) -> the users expect feature parity with other software like Adobe Illustrator or Affinity Designer
- SVG in browsers -> they get certain parts of SVG features for free by treating it like a weird variation of HTML, because they already have CSS and JavaScript functionality
- SVG as a 2D vector format for CAD and CNC use cases (including vinyl cutters, laser cutters, engravers ...) -> rarely supports anything beyond basic shapes and paths
Besides the obviously problematic features like CSS, JavaScript and animations, stuff like raster filter effects, clipping, text rendering, and certain resource references is also inconsistently supported.
From Inkscape, unless you explicitly export as plain 1.1-compatible SVG, you will likely get an SVG with some cherry-picked SVG2 features and a bunch of Inkscape-specific annotations. Inkscape tries to implement any extra features in a standard-compatible way, so that in theory, if you ignore all the inkscape-namespaced properties, you would lose some editing functionality but still get the same rendered result. In practice some SVG renderers can't even do that, and the SVG2 specification not being finalized doesn't help. And if you export as plain 1.1 SVG, some features either lack good backwards-compatibility converters or are implemented as JavaScript, making the files incompatible with anything except browsers, including Inkscape itself.
Just recently GNOME announced work on a new SVG renderer. But everything points to them planning to implement only the things they need for the icons they draw themselves and the official Adwaita theme, and nothing more.
And that's not even considering the madness of the full XML specification/feature set itself. Certain parts of it are just asking for security problems. At least in recent years some XML parsers have started shipping safer defaults, disabling or not supporting that nonsense. But when you encounter an SVG using such XML, whose fault is it? The SVG renderer's, for intentionally not enabling insane XML features, or the person's who hand-crafted the SVG using them?
Industrial cooler manufacturers and DC PR teams have their ways of greenwashing the truth.
"40% of data centers are using evaporative cooling" doesn't mean that other 60% are fully closed loop water to air coolers or what would be called "dry cooling systems" by the manufacturers. The other 60% could be "adiabatic coolers" or "hybrid coolers" or if data center is close to large body of water/water heat exchangers, where 2/3 of those still depend on evaporating water, but the manufacturers would put them in separate category from evaporative coolers.
I just took a look at the offerings of one of the industrial cooler manufacturers. They had only one dry cooler design, compared to a dozen more or less evaporative ones. And even that one was advertised as having a "post install bolt-on adiabatic kit option". Which feels like a cheat to allow claiming, during the initial project and build, that you are green by using only dry coolers, but once the press releases are done, the grant money is collected, and things start to operate at full capacity, attach sprinklers to keep the energy costs lower.
Often it's less about learning from the bugfix itself and more about the journey: learning how various pieces of software operate and fit together, and learning the tools you tried for investigating and debugging the problem.