Hacker News | new | past | comments | ask | show | jobs | submit | ChrisFoster's comments

Exactly my thought too. The offset is just the zero-frequency component. But in general, the need to do this for the zero frequency would suggest there's a scaling problem for all the lower-frequency Fourier components too? And that, perhaps, the effective spatial Fourier spectrum of the noise used in SD is not optimal?
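As a concrete illustration of "the offset is just the zero frequency": in a discrete Fourier transform, the k=0 bin is the sum of the samples, i.e. N times the mean offset. A minimal check with NumPy (the signal here is just made-up noise with a nonzero mean):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.5, scale=1.0, size=256)  # noise with a nonzero offset

X = np.fft.fft(x)
# The k=0 bin is the sum of the samples: N times the mean (the "offset").
print(np.allclose(X[0], x.sum()))            # True
print(np.allclose(X[0] / len(x), x.mean()))  # True
```

So rescaling the DC bin is the same operation as adjusting the mean offset of the signal.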


ADHD is a developmental condition which is highly genetic and relates to neurotransmitter brain chemistry (dopamine/norepinephrine). People with ADHD have it their whole life; it's not something you can develop in adulthood. However, people can be (and commonly are) diagnosed in adulthood, especially for the inattentive subtype, which has fewer outward symptoms.

What can change or develop over time is the life circumstances that a person with ADHD finds themselves in. Circumstance can make the difference between ADHD being a disorder with significant impairments vs a joyful and creative existence.

Situations which demand executive function, like putting down the mobile phone or closing YouTube in favor of doing something more productive, are much harder for people with ADHD. The market built by the tech industry to transact human attention for profit certainly hasn't helped here.

Here's a good brief overview of ADHD: https://www.adhdbitesize.com/post/understand-what-adhd-is-re...

Or a bite-size youtube version https://www.youtube.com/watch?v=xMWtGozn5jU

An online ADHD test which is relatable and seems fairly accurate: https://totallyadd.com/do-i-have-add/


> it's not something you can develop in adulthood

I disagree, from my own personal experience.

According to the CDC/DSM-5, people with ADHD simply have enough of the described ADHD symptoms, and, most importantly, those symptoms "interfere with functioning or development."

We don't yet know enough about the brain, beyond the fact that people have different brain makeups and respond differently to different stimuli. We don't quite know what causes ADHD, but we can group the overall symptoms together to try to treat them.

That said, plenty of the criteria in the DSM-5 can arise as a result of our modern lifestyles. I talked to multiple psychiatrists/therapists who all agreed that I had ADHD. I tried all the different meds; none of them really helped. The only thing that did was changing my lifestyle: completely eliminating some addictive habits that wrecked my response/reward circuits (porn and addictive video games; still a work in progress, but it helps), structuring my life for more consistency, and setting up a system that would prevent me from dropping into the negative ADHD habits.

I recommend reading ADHD 2.0 by Dr. Hallowell. Everyone's journey is different, and I don't want to take away from people who get serious improvements from traditional ADHD treatments. But ADHD is a spectrum, and it's unfortunate that many of the traits that come with it lead to negative outcomes in our modern society; it's really just a different functioning of the brain. Some activities exacerbate the negative outcomes, and some can rein them in. Like most other things, I believe ADHD is partly genetic and partly behavioral, and the weight differs from person to person. One of those you can't control, and one you (sort of) can.


As it's a developmental issue, you can't get it once your brain is fully developed.

However, your circumstances change as you become an adult, and once well-managed symptoms can start showing. You don't see impaired decision making when you can ask your parents about everything. You're not late when mom makes sure you get out on time. You manage all 2-3 household chores you have, but crack when you get 20-30. You manage your pocket money well enough, but not so much a full adult budget. You're fine when you know what you're supposed to be doing (the homework for tomorrow), but adult life gets you overwhelmed. Etc., etc.


I'm not sure we're disagreeing here? As I understand it, some people have a certain kind of distractible brain, and there's underlying brain chemistry implicated here which isn't something you develop in adulthood.

However, having this brain chemistry doesn't necessarily mean a person suffers impairments in daily life: impairments are highly situational.

I suppose - to the extent that the DSM-5 requires a fair level of impairment for someone to be diagnosed with ADHD - that it's technically correct to say people can develop this in adulthood: that's where it passes from a syndrome to a disorder. (Personally I feel it's kind of absurd not to have a name for the syndrome when it occurs without impairment due to life circumstances! Perhaps this will be "fixed" one day as psychology slowly becomes more quantitative.)

Thanks for the book suggestion!


This is due to the history of how the Julia language was bootstrapped. The fact that Jeff happened to have written his own Scheme (femtolisp) as a previous project probably helped :-)

Actually it's not just the parser but most of the compiler frontend which is written in femtolisp. It would be nice to make the frontend more accessible by replacing it with Julia code at some stage. Bootstrapping is tricky though until someone gets separate compilation working.


Femtolisp is, by the way, a brilliant and performant implementation of Scheme. I wish it were used outside of Julia.


That's biologically impossible, but we might end up with weeds which resemble the crop in new and interesting ways.

This has already happened in history for weeds which came under artificial selective pressure due to practices of separating seeds, etc. The fascinating thing is that some of these weeds took on so many of the desirable properties of the primary crop that they became crops in their own right. Rye and oats originally being mimics of wheat, for example: https://en.wikipedia.org/wiki/Vavilovian_mimicry


I tend to not be a fan of replies that are just "thanks" etc, but I must say here: thanks for your comment and link - that is ridiculously fascinating.


I used to think this was true (as a developer of a lot of generic Julia code and small data analysis applications).

But now as a developer of larger amounts of "application style" code, I'm not so sure. In an application, you've got control of the whole stack of libraries and a fairly precise knowledge of which types will flow through the more "business logic" parts of the system. Moreover, you'd really like static type checking to discover bugs early and this is starting to be possible with the likes of the amazing JET.jl. However, in the presence of a lot of duck typing and dynamic dispatch I expect static type inference to fail in important cases.

Static type checking brings so much value for larger scale application work that I'm expecting precise type constraints to become popular for this kind of non-generic code as the tooling matures.


Have you tried Infiltrator.jl? It's great for breaking out into an interactive REPL from some inner scope, allowing you to examine the local variables and other program state interactively. This covers the part of the functionality of pdb.set_trace() which I care about.


Yes, well put. I'd take it further - in numerical code there is no real distinction /at all/ between closed form vs iterative solutions. Every type of numerical code is subject to truncation error; an iterative method with few moving parts may systematically converge to a more accurate result than the direct expression of a closed-form solution.

It's quite interesting to delve into how special functions like `sin` and the like are actually implemented and the lengths people go to to make them "correctly rounded" (see, for example, crlibm). Even something as simple as linear interpolation between two floating point endpoints can be quite subtle if you want an implementation that is exactly correctly rounded and also fast.
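To make the linear interpolation subtlety concrete, here's a small sketch (values chosen specifically to expose rounding; `5e15` and `0.1` are illustrative, not from any particular library):

```python
def lerp_naive(a, b, t):
    # One fused expression; fast, but a + 1.0*(b - a) need not round back to b.
    return a + t * (b - a)

def lerp_exact_endpoints(a, b, t):
    # Exact at t == 0 and t == 1 (though not monotone in t in general).
    return (1.0 - t) * a + t * b

a, b = 5e15, 0.1
print(lerp_naive(a, b, 1.0))            # 0.0 -- because 0.1 - 5e15 rounds to -5e15
print(lerp_exact_endpoints(a, b, 1.0))  # 0.1
```

Getting an implementation that is simultaneously exact at the endpoints, monotone, and fast is exactly the kind of thing that (for instance) the C++20 `std::lerp` specification had to wrestle with.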


> I'd take it further - in numerical code there is no real distinction /at all/ between closed form vs iterative solutions.

This assertion is wrong, however. Closed-form solutions at worst accumulate the rounding errors of a single expression, while iterative solutions not only pile up rounding errors throughout the iterations but also have to stop by truncating the rest of the solution, leading to results which are not only approximate but also amplify rounding errors.


This is false, and your description of how to analyze/compare the errors of iterative algorithms is also wrong. In short, if you have an algorithm with error n*eps compared to one with m*eps + tol, there's no a priori reason why one should be greater than, equal to, or smaller than the other; it's an overgeneralization (because you don't yet know what m, n, and tol are). Focusing on "truncating the rest of the solution" is also misleading, because it makes you think only of some specific types of iterative algorithms, as if they all converged linearly/sublinearly or something. In this particular paper the closed form is given as a ratio of two integrals, and both integrals are evaluated approximately using a perfectly reasonable, suitably chosen quadrature rule; that quadrature rule has a truncation error, and that error is small enough not to matter.
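The "truncation error small enough not to matter" point is easy to demonstrate. A minimal sketch (the integrand `exp` and the 12-point Gauss-Legendre rule here are stand-ins, not the paper's actual integrals): for a smooth integrand, a modest fixed-size quadrature rule already has truncation error at the level of rounding noise.

```python
import math
import numpy as np

# 12-point Gauss-Legendre nodes and weights on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(12)

# Map to [0, 1] and integrate exp(x); the exact value is e - 1.
x = 0.5 * (nodes + 1.0)
approx = 0.5 * np.sum(weights * np.exp(x))
exact = math.e - 1.0
print(abs(approx - exact))  # on the order of 1e-16
```

At that point there is no meaningful accuracy distinction between the quadrature-based evaluation and a "closed form" evaluated in floating point.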


This is really neat. In the past I've used similar techniques to decode binary data from a third-party lidar system in parallel, in a way that the manufacturers probably didn't intend or expect.

The system generated large data files which we wanted to process in parallel without any pre-indexing. It turned out that these streams contained sync markers which were "unlikely" to occur in the real data, but there wasn't any precise framing like COBS. Regardless, the markers and certain patterns in the binary headers were enough to synchronize with the stream with a very high degree of reliability.

So for parallel processing we'd seek into the middle of the file to process a chunk of data, synchronize with the stream, and process all subsequent lidar scanlines which started in that chunk. Exactly the algorithm they describe here.

Amusingly this approach gave reasonable results even in the presence of significant corruption where the manufacturer's software would give up.
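The seek-and-resynchronize step can be sketched as follows. Everything here is hypothetical for illustration (the `MARKER` bytes and the 1-byte-length record layout are made up; the real lidar format had its own markers and header patterns):

```python
# Resynchronize with a framed stream starting from an arbitrary byte offset.
MARKER = b"\xfa\xce\xb0\x0c"  # hypothetical sync marker

def records_in_chunk(data, start, end):
    """Return payloads of all records whose marker begins in data[start:end)."""
    out = []
    pos = data.find(MARKER, start)  # skip the partial record we landed in
    while pos != -1 and pos < end:
        length = data[pos + 4]      # hypothetical 1-byte payload length
        out.append(data[pos + 5 : pos + 5 + length])
        pos = data.find(MARKER, pos + 5 + length)
    return out

# Build a fake stream of three records, then "seek into the middle" of it.
stream = b"".join(MARKER + bytes([len(p)]) + p for p in [b"one", b"two", b"three"])
print(records_in_chunk(stream, 3, len(stream)))  # [b'two', b'three']
```

Each parallel worker takes a byte range, discards the partial record at the front (the previous worker owns it), and processes every record starting in its range, so the chunks tile the file with no coordination.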


> it makes a mess of your system that is hard to clean up

This was true in the past, but the Julia ecosystem has rapidly moved away from installing anything via system package managers or messing with the system state in hard to understand ways.

Instead, everything goes into the .julia directory under the control of the Julia package manager, and the dependencies for any given project can be reproduced on another machine or OS by copying the Project and Manifest files. The same goes for binary dependencies (e.g., the result of compiling C/Fortran/Rust/Go code), for which there's some amazing cross-compilation tooling and build infrastructure — see https://binarybuilder.org/

At this point, I think Julia projects enjoy really first class portability and reproducibility.
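For those unfamiliar with the mechanism: a project's direct dependencies live in a small Project.toml, while the Manifest.toml (generated by the package manager) pins the exact version of every transitive dependency. A minimal sketch (the project name here is invented; the package names/UUIDs are those of the registered CSV and DataFrames packages):

```toml
# Project.toml -- direct dependencies only
name = "MyAnalysis"

[deps]
CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
```

Copy both files to another machine and `julia --project=. -e 'using Pkg; Pkg.instantiate()'` recreates the same environment, downloading the pinned packages and prebuilt binary artifacts.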


Yeah I agree. Comparing Julia to the mess that is eg python environments, this seems loads better. And I think it’s a reasonable state for something where your deps might not be in the language. Maybe specifying things in a more nix-like way would be better but I think it’s pretty good as is and doesn’t pollute your system with random installations


The piston idea seems like an appealing design requiring minimal new technology. The plant is the same as an existing pumped hydro plant, and the weight can come from the native rock.

Using a rolling membrane for the seal is discussed at

https://heindl-energy.com/technical-concept/engineering-chal...

Naively that does seem plausible but it's a very large sheet of material to manufacture. What are your thoughts on that approach?

