jharsman's comments | Hacker News

This is very true. There are several reasons why most EHRs are so bad:

1) The people who pay generally do not use the system. This is true for enterprise software in general, and it leads vendors to prioritize having every feature organizations ask for (regardless of whether it is a good idea) and to prioritize features management deems important over fundamental workflow, UX and polish in general.

2) EHRs are very large and complex and can almost always gain more customers by gaining even more features and replacing smaller, more specialized systems. A typical EHR will have features for ordering tests and viewing results (for clinical chemistry, microbiology, radiology and more specialized things like physiology), appointments and resource planning (rooms, equipment, personnel, staffing), clinical notes including computing scores and values based on other values, medication (ordering, administering, sending prescriptions electronically) and administration (admissions, discharge, payment, waiting lists). That is a lot of different stuff!

3) Once a vendor wins a contract and installs their EHR, very little can be gained by improving the lives of users. Contracts and sales cycles are very long, and the vendor gains very little financially by improving the system. So many vendors are focused on charging money for customer specific features or adding new features to win new tenders.

I'm not sure what the solution is, public alternatives have failed spectacularly since they are typically run by public administrators who have even less of a clue how to develop software and what users want than the vendors.


> So many vendors are focused on charging money for customer specific features or adding new features to win new tenders.

In turn, this enterprisey anti-pattern creates unfocused products which can be configured to sort-of-solve every niche customer requirement that might block the sale.

The result is a massive ball of muddy configurations and feature-flags, so that learning isn't very portable and backend integrations are very painful.


I would add the point that these dynamics are also present for any large IT system. Just search for people having issues migrating to SAP or Amadeus or whatever.


Yes, the standard solution to this is to use the curl of the scalar-valued noise field. This gives you a vector field which is perpendicular to the gradient and is divergence-free.
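
For anyone wanting to try this, a minimal numerical sketch in plain Python. The `noise` function here is just a smooth stand-in (real implementations would use Perlin/simplex noise), and the finite-difference step sizes are arbitrary choices:

```python
import math

# Stand-in for a smooth scalar noise field; a sum of sines has the
# same smoothness properties as real gradient noise.
def noise(x, y):
    return (math.sin(1.7 * x + 0.3) * math.cos(2.3 * y - 1.1)
            + 0.5 * math.sin(3.1 * x - 2.0 * y))

# Curl of a scalar field f in 2D: v = (df/dy, -df/dx).
# This vector field is perpendicular to the gradient of f and
# divergence-free by construction.
def curl_noise(x, y, eps=1e-4):
    dfdx = (noise(x + eps, y) - noise(x - eps, y)) / (2 * eps)
    dfdy = (noise(x, y + eps) - noise(x, y - eps)) / (2 * eps)
    return (dfdy, -dfdx)

# Numerical sanity check: the divergence of the curl field is ~0.
def divergence(x, y, eps=1e-3):
    vx1, _ = curl_noise(x + eps, y)
    vx0, _ = curl_noise(x - eps, y)
    _, vy1 = curl_noise(x, y + eps)
    _, vy0 = curl_noise(x, y - eps)
    return (vx1 - vx0) / (2 * eps) + (vy1 - vy0) / (2 * eps)
```

Because particles advected by a divergence-free field neither bunch up nor thin out, this is a popular trick for fluid-like motion in effects code.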



Awesome, thanks for the links!


I was an early adopter of Mercurial, and the team's insistence that file names were byte strings was the cause of lots of bugs when it came to Unicode support.

For example, when I converted our existing Subversion repository to Mercurial, I had to rename a couple of files that had non-ASCII characters in their names because Mercurial couldn't handle them. At least on Windows, the file names would be broken either in Explorer or on the command line.

In fact I just checked, and it is STILL broken in Mercurial 4.8.2, which I happened to have installed on my work laptop with Windows. Any file with non-ASCII characters in the name is shown as garbled in the command line interface on Windows.

I remember some mailing list post way back when where mpm said that it was very important that hg was 8-bit clean, since a Makefile might contain some random string of bytes that referred to a file, and for that Makefile to work the file in question had to have exactly the same string of bytes for a name. Of course, if file names are just strings of bytes instead of text, you can't display them, send them over the internet to a machine with another file name encoding, or do hardly anything useful with them. So basic functionality still seems to be broken in order to support Unix systems with non-ASCII filenames that aren't in UTF-8.


> the team's insistence that file names were byte strings was the cause of lots of bugs when it came to Unicode support

File names are a different problem because Windows and Unix treat them differently: Unix treats them as bytes and Windows treats them as Unicode. So there is no single data model that will work for any language.


The Rust standard library has a solution for this that actually works: on Unix-like systems file paths are sequences of bytes, and most of the time the bytes are UTF-8. On Windows, they are WTF-8, so the API user sees a sequence of bytes and most of the time they match UTF-8.

This means that there's more overhead on Windows, but it's much better to normalize what the application programmer sees across POSIX and NT while still roundtripping all paths for both than to make the code unit size difference the application programmer's problem like the C++ file system API does.


> On Windows, they are WTF-8

Seems like an apt acronym for Windows... :-)

On a more serious note, Python seems to have done something fairly similar with the pathlib standard library module.
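
The mechanism in CPython actually lives below pathlib, in PEP 383's "surrogateescape" error handler: undecodable filename bytes are smuggled into str as lone surrogates so they round-trip losslessly. A self-contained demonstration:

```python
# PEP 383 in action: a Latin-1 encoded filename that is invalid UTF-8.
raw = b"caf\xe9.txt"

# Decoding with "surrogateescape" maps the bad byte 0xE9 to the lone
# surrogate U+DCE9 instead of raising UnicodeDecodeError.
name = raw.decode("utf-8", "surrogateescape")
assert name == "caf\udce9.txt"

# Encoding with the same handler restores the original bytes exactly.
assert name.encode("utf-8", "surrogateescape") == raw
```

On POSIX systems, `os.fsencode` and `os.fsdecode` apply this handler automatically, which is how Python 3 programs can pass arbitrary byte filenames through str-typed APIs.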


Not to mention case-sensitivity issues. Can you have two files, one named "FILE.txt" and the other "file.txt" in the same directory for instance?


On Windows? Of course you can.


I'm certain you can on Linux as well. Only the Mac's old HFS would not allow it.


Isn't this a fairly recent change?


NTFS has always been case-sensitive; the Windows API just lets you treat it as case-insensitive. If you pass `FILE_FLAG_POSIX_SEMANTICS` to `CreateFile` you can create files that differ only in case.


Good luck using those in some tools which use the API differently though. Windows filenames are endless fun. What's the maximum length of the absolute path of a file? Why, that depends on which API you're using to access it!


Even worse on Unix where it depends on the mount type. Haven't seen much proper long filename support in Unix apps or libs, it's much better in Windows land. Garbage in garbage out is also a security nightmare as names are not identifiable anymore. You can easily spoof such names.


Hum, any program that doesn't treat filenames as bytestreams on unix is broken. Doubly so if its primary purpose is preserving and archiving files.

Are you sure the issue wasn't something else?


The point is that filenames aren't bytestreams on windows, and if you treat them as such then your program won't work.


By this point, any cross-platform file tool that isn't using Unicode as a lowest common denominator for filenames and similar things to ensure maximal compatibility is liable to cause havoc.

(The remarks in the post here that Mercurial on Python 3 on Windows is not yet stable and shows a lot of issues are possibly even an indicator/canary here. To my understanding, Python 2 on Windows used to paper over some of these lowest-common-denominator encoding compatibility issues with a lot more handholding than Python 3's Unicode assumption does.)


> By this point, any cross-platform file tool that isn't using Unicode as a lowest common denominator for filenames and similar things to ensure maximal compatibility is liable to cause havoc.

Be that as it may, Mercurial has existing repositories that may use non-unicode filenames, and just crashing whenever you try to operate on them is probably not an acceptable way forward.


Sure, but that's also not the only resulting option; instead of erroring you could also do something nice like help those users migrate to cleaner Unicode encodings of their filenames by asking them to correct mistakes or provide information about the original encoding. It takes more code to do that than just throwing an error, of course, but who knows how many users that might help that don't even realize why their repositories don't work correctly on, say, Windows.


Windows filenames basically are bytestreams. But the bytes come in pairs.


Not really. Certain byte sequences are invalid.


Certain byte sequences are invalid in unix filenames too. So that can't be the factor that decides if they are bytestreams or not.


If hg borked on non-ascii characters, it sounds like the problem was rather that it didn't treat that data as a bag-of-bytes. Not the other way around?


He was trying to use Windows. On Windows, you pretty much have to go through Unicode to UTF-16; file names can't be arbitrary bytes and can't be UTF-8.

(I think that relatively recently it is possible to use utf8 with some new windows interfaces ... but this is probably not widely compatible with older windows releases ...)


Windows uses arbitrary shorts that are sort of supposed to be utf-16. Just like Unix uses arbitrary bytes that are sort of supposed to be utf-8.

You have to convert between them, but neither uses proper Unicode to represent filenames.


Yeah, but utf-16 is still bytes. It's just bytes with a different encoding.

But I do see the pain with Python 3 where the runtime tries to hide these kinds of issues from you. That abstraction can make it difficult to have the right behaviour.


Everything is bytes, but the meaning assigned to those bytes matters. Let's say I create a file named «Файл» on Unix in UTF-8 and put it into a git repo. For Unix it is a sequence of bytes that is the UTF-8 representation of Russian letters. So far so good. Now I clone this repo to Windows; what should happen? The file cannot be created with the name as encoded into bytes on Unix: that will be garbage (which even has a special name, "mojibake") in the best case, or fail outright in the worst. What should happen is decoding those bytes from UTF-8 to get the original Unicode code points, then encoding them using Windows' native encoding (UTF-16).
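
The two outcomes are easy to show concretely. Reinterpreting the UTF-8 bytes in another encoding produces mojibake; decoding to code points first and re-encoding for the target platform does not:

```python
name = "Файл"
raw = name.encode("utf-8")   # the bytes stored on the Unix side

# Wrong: treat the UTF-8 bytes as if they were Latin-1 -> mojibake.
assert raw.decode("latin-1") == "Ð¤Ð°Ð¹Ð»"

# Right: decode UTF-8 back to code points, then encode with the
# target platform's native encoding (UTF-16 on Windows).
assert raw.decode("utf-8").encode("utf-16-le") == name.encode("utf-16-le")
```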


True, but one of those representations still needs to be the canonical one in the repo for the purposes of hashing into the commits and so on.

Git builds in a bunch of logic like this around handling line endings in text files.


Everything isn't bytes. Strings without an encoding don't have a specific byte representation.


It's the other way around. Strings always have meanings and always reference the same characters. You use an encoding to encode strings into bytes.

Bytes without an encoding don't have any meaning; they are just... random bytes.


We're actually saying the same thing. You're saying without an encoding you can't turn bytes into a string (technically, in Python terminology, that's a decoding, but you know... ;-). I'm saying a string doesn't have a byte representation without an encoding. That's two perspectives on the same truth.

I absolutely agree that a string has meaning without a byte representation. That's the whole point of having it as a distinct type.


UTF-16 is not "just bytes". There are sequences of bytes that are not valid UTF-16, so if you want to roundtrip bytes through UTF-16 you have to do something smarter than just pretending the byte sequence is UTF-16.
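A concrete example of such a sequence is a lone high surrogate with no low surrogate following it, which a strict UTF-16 decoder rejects:

```python
# U+D800 encoded as UTF-16-LE with nothing after it: an unpaired
# high surrogate, which is not valid UTF-16.
lone_surrogate = b"\x00\xd8"

try:
    lone_surrogate.decode("utf-16-le")
    decoded = True
except UnicodeDecodeError:
    decoded = False

assert not decoded  # strict decoding refuses the unpaired surrogate
```

This is exactly why round-tripping arbitrary 16-bit filenames needs something like WTF-8, which deliberately tolerates unpaired surrogates.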


Sorry, I wasn't trying to imply that any permutation of bytes would work. If you encode it improperly, it's not going to work.


Dyalog launches multiple threads for certain operations if the arrays are large enough to justify the overhead.
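
As a rough illustration of that dispatch policy (Dyalog's actual implementation is closed C code; the threshold and names below are made up, and in CPython threads only pay off for work that releases the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical cutoff: below this, thread startup/synchronization
# overhead outweighs any parallel speedup.
PARALLEL_THRESHOLD = 10_000

def map_elementwise(fn, xs, workers=4):
    if len(xs) < PARALLEL_THRESHOLD:
        return [fn(x) for x in xs]              # small array: stay serial
    # Large array: split into chunks and fan out to worker threads.
    chunk = (len(xs) + workers - 1) // workers
    parts = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda part: [fn(x) for x in part], parts)
    return [y for part in results for y in part]
```

The interesting design point is that the threshold makes parallelism invisible to the caller: small arrays never pay the overhead.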


That seems like the right approach for CPU-targeted code. Has an APL descendant ever been created to target a GPGPU compute-kernel, or even to compile to an FPGA netlist description of a DSP?


Aaron Hsu’s co-dfns is a compiler of a subset of APL, written in APL, which compiles to GPU code.

Do an HN Algolia search for his username "arcfide" to find a lot of discussion, and there are a couple of YouTube talks: one of him walking through the codebase on a livestream, another a recording of a conference-style talk introducing it to people.

It needs Dyalog APL, but that’s now easily available for non commercial use.



That's not how lossless compression of JPEGs works.

Besides removing information from the file that doesn't affect the rendered image (like EXIF data), lossless recompressors typically replace the Huffman coding of the DCT coefficients with a more efficient arithmetic coder. So you don't start over from raw pixels; you replace the type of entropy coding used with a more modern and efficient algorithm. That means ordinary software can't read the JPEG (since you've essentially created a new format), but you can decompress back into a standard JPEG whenever someone wants to look at the image.


> Besides removing information from the file that doesn't affect the rendered image

You can do this if the goal is pixel perfect accuracy, but Flickr can’t do this since they have “a long-standing commitment to keeping uploaded images byte-for-byte intact”…


I bet a lot of those ICC color profiles are the same across many images though... One could strip the metadata, keep it in a separate deduplicated database, and then reassemble the file when the user accesses it.
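
A content-addressed store makes this kind of dedup almost trivial. A sketch (the class and names here are hypothetical, not anything Flickr actually runs):

```python
import hashlib

class BlobStore:
    """Store each unique metadata blob (e.g. an ICC profile) once,
    keyed by its hash; images keep only the short key."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(key, data)   # no-op if already stored
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]
```

On upload you'd strip the profile, `put` it, and record the key with the image; on access you `get` it back and splice it into the original byte position, keeping the served file byte-for-byte identical.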


Yes, the actual burning-fuel part is just random noise, which doesn't look very good. I mention it as a possible improvement under "Better looking fuel".


I can't get your example code to work, but that is a completely different technique: ray marching a volume displaced by a noise function. This gives nice 3D-looking flames, but the movement tends to look like a scrolling noise function. And it's harder to use arbitrary burning shapes; my simulation supports drawing anything, and it will burn.


Sorry, this version won't just paste back as-is into Shadertoy. Also, it needs a random 256x256 bitmap in channel 0. The effect is the same as ozzy's original, except change ITERATIONS to 13 in his code to see the difference (which is only slight). The other changes were for performance, which I improved slightly.


Traditionally TVs only support 60 Hz refresh rates (or 50 Hz for older PAL sets), so you either render a new frame for each frame the TV can refresh, or you display a frame for two TV refreshes.

This isn't strictly true any more, since many TVs now support 72 Hz (to be able to display 24 fps content like film), but my guess is that doesn't have wide enough support to rely on.
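
On a fixed 60 Hz set, the classic workaround for 24 fps film is 3:2 pulldown: alternate holding each frame for three and then two refreshes, since 24 × 2.5 = 60. A quick sketch of the arithmetic:

```python
# 3:2 pulldown: map 24 film frames onto 60 display refreshes per
# second by holding frames for 3, 2, 3, 2, ... refreshes.
def pulldown_schedule(film_fps=24):
    return [3 if i % 2 == 0 else 2 for i in range(film_fps)]

schedule = pulldown_schedule()
assert len(schedule) == 24   # one hold count per film frame
assert sum(schedule) == 60   # exactly fills one second of refreshes
```

The uneven hold times (some frames shown 50% longer than others) are what cause the judder that a native 72 Hz mode, showing every frame for exactly three refreshes, avoids.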


I find it really weird that the article says:

> It’s curious that we have low cache hit rates, a lot of time stalled on cache/memory, and low bandwidth utilization.

That's typical for most workloads! Software is almost never compute or bandwidth bound in my experience, but instead spends most of its time waiting on memory in pointer chasing code. This is especially true for code written in managed languages like Java (since everything typically is boxed and allocated all over the heap).


It's curious that there's low bandwidth utilization despite the low hit rates / CPU stalled waiting for memory. Don't you think?

Perhaps random access of small data causes frequent waits without utilizing the bandwidth in a way that block copies would.


You don't get high bandwidth utilization by pointer chasing unless you have many threads doing it and you switch threads while waiting on memory. That's true for GPUs, not for typical server workloads running on CPUs.
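
The dependency structure is easy to demonstrate: when each load's address comes from the previous load, the CPU can't overlap cache misses, so bandwidth stays idle no matter how much the core stalls. An illustrative sketch (in Python the interpreter overhead dominates any timing, but the access pattern is the point):

```python
import random

# Build a random single cycle over n slots: following next_idx from 0
# visits every slot exactly once before returning to 0. This mimics a
# linked list scattered randomly through memory.
def build_chain(n):
    order = list(range(1, n))
    random.shuffle(order)
    cycle = [0] + order
    next_idx = [0] * n
    for i in range(n):
        next_idx[cycle[i]] = cycle[(i + 1) % n]
    return next_idx

def chase(next_idx, steps):
    # Each iteration's load address depends on the previous load's
    # result, so on real hardware only one miss is in flight at a time.
    i = 0
    for _ in range(steps):
        i = next_idx[i]
    return i
```

Contrast this with a sequential array sum, where the hardware prefetcher and out-of-order execution can keep many cache lines in flight at once; that's the workload shape that actually saturates memory bandwidth.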

