
I'm still not sure I get it. I think it is:

1. Put the BWT string in the right-most empty column

2. Sort the rows of the matrix so that, read from the top row down, the rows are in lexicographical order?

3. Repeat steps 1 and 2 until the matrix is full

4. Extract the row of the matrix that has the end-delimiter in the final column

It's the "sort matrix" step that seems under-explained to me.


I think what did it for me was recognizing that, in each row, the character in the last column cyclically comes right before the character in the first column (since the rows are rotations). So we can view the last column as coming "before" the first one.

1. We sort the BWT column to get the first, sorted column. Thereby we create the structure where the BWT column comes "before" the sorted column.

2. Therefore, if we insert the BWT column immediately before the sorted column, the order of the characters within each row is preserved.

3. If we now sort the rows again, the order of characters within each individual row is still preserved (sorting only reorders whole rows).

4. Going back to step 2 again preserves the within-row order.

5. Once all columns are populated, every row therefore has its characters in the correct original (rotated) order. And thanks to the end marker we can recover the original string from any row.
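
A minimal sketch of that reconstruction in Python (the '$' end marker and the "banana" example are my own illustration, not from the comments above):

    # Naive inverse BWT: repeatedly prepend the BWT column to the rows and sort.
    # Assumes the original string was terminated with a unique end marker '$'.
    def inverse_bwt(bwt, end_marker="$"):
        rows = [""] * len(bwt)
        for _ in range(len(bwt)):
            # Insert the BWT column "before" the current (sorted) rows...
            rows = [c + row for c, row in zip(bwt, rows)]
            # ...then sort the rows; the order within each row is untouched.
            rows.sort()
        # The row ending in the end marker is the original string.
        for row in rows:
            if row.endswith(end_marker):
                return row

    print(inverse_bwt("annb$aa"))  # BWT of "banana$" -> prints "banana$"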


I have the feeling that B-splines would be a good solution for this problem. Given that they have a continuous zeroth (i.e., the function itself is continuous), first, and second derivative, the motion will always be smooth and there will be no kinks. However, maybe that just moves the problem, because now you must tune the coefficients of the B-spline instead of the damping parameters (and even though a direct mapping between the two must exist, it may not be trivial).
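
As a rough sketch of that idea (using scipy's make_interp_spline; the waypoint times and positions below are made up for illustration):

    import numpy as np
    from scipy.interpolate import make_interp_spline

    t_wp = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # waypoint times (s)
    x_wp = np.array([0.0, 0.5, 2.0, 2.5, 4.0])   # waypoint positions

    # A cubic B-spline is C2-continuous: continuous position, velocity, acceleration.
    spline = make_interp_spline(t_wp, x_wp, k=3)

    t = np.linspace(0.0, 4.0, 200)
    position = spline(t)
    velocity = spline.derivative(1)(t)
    acceleration = spline.derivative(2)(t)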


Multiple reasons: while it is technically better and has more benign compression artifacts, it is computationally more expensive, offers limited quality improvements, is encumbered by patents, and has a poor metadata format and poor colorspace support... In the end, the benefits aren't big enough compared to JPEG to change the default format.


I really like einops. It works for NumPy, PyTorch, and Keras/TensorFlow and has easy named transpose, repeat, and einsum operations.
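
For example (shapes and patterns below are just for illustration):

    import numpy as np
    from einops import rearrange, repeat, einsum

    x = np.random.rand(2, 3, 32, 32)  # batch, channels, height, width

    # Named transpose and reshape in one readable expression
    y = rearrange(x, "b c h w -> b h w c")
    patches = rearrange(x, "b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=8, p2=8)

    # Repeat along a new axis
    tiled = repeat(x, "b c h w -> b c h w n", n=4)

    # Named einsum
    w = np.random.rand(3, 5)
    projected = einsum(x, w, "b c h w, c d -> b d h w")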


Same - I’ve been using einops and jaxtyping together pretty extensively recently, and it helps a lot for reading/writing multidimensional array code. Also array_api_compat: the API coverage isn’t perfect, but it’s pretty satisfying to write code that works for both PyTorch and NumPy arrays.

https://docs.kidger.site/jaxtyping/

https://data-apis.org/array-api-compat/
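
A small sketch of how these combine (the function and shape names are made up; the jaxtyping annotation below is written for NumPy arrays, while array_api_compat dispatches to whichever array library the input actually comes from):

    import numpy as np
    from einops import rearrange
    from jaxtyping import Float
    from array_api_compat import array_namespace

    def normalize(x: Float[np.ndarray, "batch channels height width"]):
        # Look up the array API namespace of x (NumPy, PyTorch, ...)
        xp = array_namespace(x)
        flat = rearrange(x, "b c h w -> b (c h w)")
        mean = xp.mean(flat, axis=-1, keepdims=True)
        std = xp.std(flat, axis=-1, keepdims=True)
        return rearrange((flat - mean) / std, "b (c h w) -> b c h w",
                         c=x.shape[1], h=x.shape[2], w=x.shape[3])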


Full paper link for the interested: https://ehdijrb3629whdb.tiiny.site


"404 Sorry, this content doesn't exist."


In medical imaging, data are often acquired with anisotropic resolution. So a pixel (or voxel in 3D) can be an averaged signal sample originating from 2 mm of tissue in one direction and 0.9 mm in another direction.
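
A common downstream fix is resampling to isotropic spacing; a rough sketch (the array shape and the linear interpolation order are just typical choices, not a rule):

    import numpy as np
    from scipy.ndimage import zoom

    volume = np.random.rand(40, 256, 256)      # slices x rows x cols
    spacing = np.array([2.0, 0.9, 0.9])        # mm per voxel, anisotropic
    target = np.array([0.9, 0.9, 0.9])         # desired isotropic spacing

    # Zoom factor per axis = current spacing / target spacing
    iso = zoom(volume, spacing / target, order=1)
    print(iso.shape)                           # roughly (89, 256, 256)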


and it’s displayed with a completely different algorithm…


Conda indeed is slow. However, Mamba is a drop-in replacement for Conda that uses a much faster solver, which makes it a lot more palatable.


Does it use a SAT solver with better average-case behavior, or does it sacrifice full SAT solvability?


Without fully solving it, it is impossible to install packages. This is only anecdotal, but I find Mamba better at solving tricky dependency requirements, like a certain version of Python and a certain version of PyTorch with CUDA support and a certain protobuf version.


Not quite what you are looking for, but if you're interested in Operation Market Garden: for the Dutch maps there is https://www.topotijdreis.nl, which gives you historical maps with a year slider. This can at least help one visualize how cities, villages, and topography changed through the years.


There are also tools that wrap part of topotijdreis and add other georeferenced historical maps! I recently saw one of those at https://geodienst.xyz/pastforward. I wish more people georeferenced historical maps, but it is tough.


CGP Grey also made an excellent video about it, which he dubbed the NaPoVoInterCo: https://www.youtube.com/watch?v=tUX-frlNBJY


But Chinese (or Mandarin) does not have a context-free grammar, whereas I believe that encoding a language on a Turing machine implies a context-free grammar, so this example doesn't hold.


Well, a couple of points: it's not obvious that Chinese doesn't have a context-free grammar; see the talk by David Branner, "The Grammar of Classical Chinese is Very Close to Being a Context-Free Grammar".

And a properly programmed Turing machine can parse languages that are far more complex than context-free languages.
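
For instance, the textbook non-context-free language a^n b^n c^n is trivial to decide with a short program (and hence with a Turing machine); a quick sketch:

    import re

    def is_anbncn(s: str) -> bool:
        # Accepts strings of the form a^n b^n c^n (n >= 0), the classic example
        # of a language that is not context-free but is easily decidable.
        m = re.fullmatch(r"(a*)(b*)(c*)", s)
        return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

    print(is_anbncn("aabbcc"))  # True
    print(is_anbncn("aabbc"))   # False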

