nsajko's comments | Hacker News

> the culture of not prioritizing correctness in computation

On the contrary, it is my impression the experienced Julia programmers, including those involved in JuliaLang/julia, take correctness seriously. More so than in many other PL communities.

> there are people working on traits/interfaces - but these are still peripheral projects and not part of the core mission to my knowledge

What exactly do you mean by "traits" or "interfaces"? Why do you think these "traits" would help with the issues that bug you?


True, they actually are good at following up on numerical correctness, so I should rephrase 'correctness in computation' as 'correctness in composition': the types of bugs that arise from mashing a lot of modules together. In one sense that's not a Julia issue but a package ecosystem issue.

I think you're actually even more active in the Julia community, so maybe I don't have to summarize the debate, but these are the kinds of trait and interface packages being developed that are meant to formalize how modules can be used and extended by others:

https://github.com/rafaqz/Interfaces.jl

https://discourse.julialang.org/t/interfaces-traits-in-julia...
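
For anyone unfamiliar, the pattern these packages generalize is roughly the "Holy traits" idiom. A minimal sketch, with made-up names, of how a trait lets generic code dispatch on a capability rather than on the type hierarchy:

    abstract type TableTrait end
    struct IsTable <: TableTrait end
    struct NotTable <: TableTrait end

    # Types opt in by adding a method; the default is opted out.
    istable(::Type) = NotTable()
    istable(::Type{<:AbstractDict}) = IsTable()

    # Generic code dispatches on the trait value, not on the subtype relation.
    rows(x) = rows(istable(typeof(x)), x)
    rows(::IsTable, x) = collect(pairs(x))
    rows(::NotTable, x) = error("does not implement the table interface")

Packages like Interfaces.jl aim to make this kind of contract declarable and mechanically checkable rather than leaving it to convention.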


What I wanted to say is that I'm skeptical regarding "interfaces", either as a language feature or as a package. Although TBH I have not yet given any specific "interfaces" design more than a cursory glance, so my position is not really justified.


The Julia world is already quite careful with testing and CI. Apart from the usual unit testing, many packages do employ integration testing. The Julia project itself (compiler, etc) is tested against the package ecosystem quite often (regularly and for select pull requests).


I certainly didn't mean to imply that Julia's community was incompetent or that they were not doing integration testing. CRAN's approach (which is mandatory integration testing against all known dependents enforced by the packaging authority - the global and mandatory nature being what makes it different) is genuinely innovative and radical. I don't think that's an approach that should be adopted lightly or by most ecosystems, but I do observe that a.) these languages have similar goals and b.) it's an approach intended to solve problems of much the same shape as described in the article.

Again, I think this approach is too radical for most ecosystems, but Julia is pursuing a similarly radical level of composability/reusability and is evidently encountering difficulties with it, so I think the approach may be a good fit there.


I don't think testing against every existing dependent would make sense currently. The issue is the lack of tooling for mechanically checking whether the dependent accesses implementation details of the dependency, in which case it would be valid for the dependency to break the dependent.
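
To illustrate what I mean by accessing implementation details (the package and names are made up):

    import SomePackage

    SomePackage.transform(data)    # documented, public API: safe to rely on
    SomePackage._internal_cache    # unexported, undocumented helper: the dependency
                                   # is free to change or delete it in any release

A dependent doing the latter can be broken by a perfectly valid release of the dependency, so failures in its test suite would not be a meaningful signal.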

There are some proposals to forbid the registration of a package release which trespasses on the internals of another package, though.

I hope someone tackles the above sooner or later, but another issue is that testing every known dependent package might be very costly, both in terms of compute and of manual labor, the latter because someone would have to do the work of maintaining a blacklist of packages with flaky unit tests. The good news is that this work might considerably overlap with the already existing PkgEval infrastructure. We'll see.


Julia is not without warts, but this blog post is kinda rubbish. The post claims vague but scary "correctness issues", trying to support this with a collection of unrelated issue tickets from all across Julia and the Julia package ecosystem, not all of which were even bugs in the first place, and many of which have long been resolved.

The fact that bugs happen in software should not surprise anyone. Even software of critical importance, such as GCC or LLVM, whose correctness is relied upon by the implementations of many programming languages (including C, C++ and Julia itself), is buggy.

Instead the post could have focused more on actual design issues, such as some of the Base interfaces being underspecified:

> the nature of many common implicit interfaces has not been made precise (for example, there is no agreement in the Julia community on what a number is)

The underspecified nature of Number (or Real, or IO) is an issue, albeit one not related to the rest of the blog post. It does not excuse the scaremongering in the blog post, however.


Number isn’t an interface—there are no operations common to all numbers. Subtyping Number is a way to opt into numeric promotion and a few other useful generic behaviors. That’s it. The fact that some abstract types are interfaces with expected behaviors, while others are dispatch points to opt into behaviors is a double edged sword: powerful and flexible, but only explicitly expressed/explained in documentation.
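
For example, a tiny Number subtype that defines no arithmetic at all can still participate in promotion (Meter is just an illustrative name):

    struct Meter <: Number
        val::Float64
    end

    # Opting into promotion/conversion, which generic numeric code relies on:
    Base.promote_rule(::Type{Meter}, ::Type{Float64}) = Float64
    Base.convert(::Type{Float64}, m::Meter) = m.val

    # promote(Meter(3.0), 1.5) now yields two Float64 values,
    # even though Meter itself defines no +, *, etc.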


> Number isn’t an interface—there are no operations common to all numbers.

When creating a new type, it should be more clear-cut when subtyping Number (or Real, etc.) is valid. Should unitful quantities be numbers? Should intervals be numbers? Related: I think there are some attempts by Tim Holy and others to create/document "thick numbers".

Furthermore, I believe it might be good to align the Number type hierarchy with math/abstract algebra as much as possible without breaking backwards compatibility, which might mean making Number, or some subtypes of it, actual interfaces.

> Subtyping Number is a way to opt into numeric promotion and a few other useful generic behaviors. That’s it.

OK, but I think that's not documented either.


Yeah, probably should be documented.


> it is plausible that ChatGPT can get to a state where it can act as a good therapist

Be careful with that thought, it's a trap people have been falling into since the sixties:

https://en.wikipedia.org/wiki/ELIZA_effect


Eventual plausibility is a suitably weak assertion; to refute it you would have to at least suggest that it is never possible, which you have not done.


I dunno, I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis. Maybe having something like ChatGPT available could be better than nothing, at least as a way for people to start being a bit introspective, even if it's LLM-assisted. Maybe it gets a bit easier to ask questions that feel stigmatized, since you know (or think) no other human will see them, just a robot that has no feelings and won't judge you.

I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?


> I feel like most people (probably not the typical HN user though) don't even think about their feelings, wants or anything else introspective on a regular basis.

Well, two things.

First, no. People who engage on HN are a specific part of the population, with particular tendencies. But most of the people here are simply normal, so outside of the limits you consider. Most people with real social issues don’t engage in communities, virtual or otherwise. HN people are not special.

Then, you cannot follow this kind of reasoning when thinking about a whole population. Even if people on average tend to behave one way, this leaves millions of people who would behave otherwise. You simply cannot optimise for the average and ignore the worst case in situations like this, because even very unlikely situations are bound to happen a lot.

> Maybe having something like ChatGPT available could be better than nothing, at least for people to start being at least a bit introspective, even if it's LLM-assisted.

It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.

An LLM regresses towards the mean of its training set. For people who are already outside the main mode of the distribution, this is completely unhelpful, and potentially actively harmful. By design, an LLM won’t follow a path that was not beaten in its training data. Most of them are actually biased to make their user happy and validate what they are told rather than get off that path. It just does not work.

> I agree that it probably won't replace a proper therapist/psychologist, but maybe it could at least be a small step to open up and start thinking?

In my experience, not any more than reading a book would. Future AI models might get there, I don’t think their incompetence is a law of nature. But current LLMs are particularly harmful for people who are already in a dicey psychological situation.


> It is worse than nothing. A LLM does not understand the situation or what people say to it. It cannot choose to, say, nudge someone in a specific direction, or imagine a way to make things better for someone.

Right, whether or not that is true, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel the better option is the latter, not the former, considering that it can be a first step rather than a complete health care solution to a complicated psychological problem.

> In my experience, not any more than reading a book would.

But even getting out in the world to buy a book (literally or figuratively) about something, thereby acknowledging that you have a problem, can be (or at least feel like) a really big step that many are not ready to take. Contrast that with talking to an LLM that won't remember you nor judge you.

Edit:

> Most people with real social issues don’t engage in communities, virtual or otherwise.

Not sure why you're focusing on social issues; there are a bunch of things people deal with on a daily basis that they could feel much better about if they just spent some time thinking about how they feel about them, instead of the typical reactionary response most people have. Probably every single human out there struggles with something and is unable to open up about it with others. Even people like us who interact with communities online and offline.


I think people are getting hung up on comparisons to a human therapist. A better comparison imo is to journaling. It’s something with low cost and low stakes that you can do on your own to help get your thoughts straight.

The benefit from that perspective is not so much in receiving an “answer” or empathy, but in getting thoughts and feelings out of your own head so that you can reflect on them more objectively. The AI is useful here because it requires a lot less activation energy than actual journaling.


> Right, no matter if this is true or not, if the choice is between "Talk to no one, bottle up your feelings" and "Talk to an LLM that doesn't nudge you in a specific direction", I still feel like the better option would be the latter, not the former, considering that it can be a first step, not a 100% health care solution to a complicated psychological problem.

You’re right, I was not clear enough. What would be needed is a nudge in the right direction. But the LLM is very likely to nudge in another one, the direction most people would need or take, simply because that direction was the norm in its training data. That is OK on average, but particularly harmful to people who are in a situation where they would be having this kind of discussion with an LLM.

Look at the effect of toxic macho influencers for an example of what happens with harmful nudges. These people need help, or at least a role model, but a bad one does not help.

> But to even get out in the world to buy a book (literally or figuratively) about something that acknowledges that you have a problem, can be (at least feel) a really big step that many are not ready to take.

Indeed. It’s something that should be addressed in mainstream education and culture.

> Not sure why you're focusing on social issues,

It’s the crux. If you don’t have problems talking to people, you are much more likely to run into someone who will help you. Social issues are not necessarily the problem, but they are a hurdle in the path to find a solution, and often a limiting one. Besides, if you have friends to talk to and are able to get advice, then a LLM is even less theoretically useful.

> Probably every single human out there struggle with something, and are unable to open up about their problems with others. Even people like us who interact with communities online and offline.

Definitely. It’s not a problem for most people, who can rationalise their problems themselves, either with time or with some help. It gets worse if they can’t for one reason or another, and worse still if they are misled, intentionally or not. LLMs are no help here.


I think you're unreasonably pessimistic in the short term, and unreasonably optimistic in the long term.

People are getting benefit from these conversations. I know people who have uploaded chat exchanges and asked an LLM for help understanding patterns and subtext to get a better idea of what the other person is really saying - maybe more about what they're really like.

Human relationship problems tend to be quite generic and non-unique, so in fact the averageness of LLMs becomes more of a strength than a weakness. It's really very rare for people to have emotional or relationship issues that no one else has experienced before.

The problem is more that if this became common OpenAI could use the tool for mass behaviour modification and manipulation. ChatGPT could easily be given a subtle bias towards some belief system or ideology, and persuaded to subtly attack competing systems.

This could be too subtle to notice, while still having huge behavioural and psychological effects on entire demographics.

We have the media doing this already. Especially social media.

But LLMs can make it far more personal, which means conversations are far more likely to have an effect.


Julia does this for parametric types, too.


> whiffed a few major decisions early on

Anything particular in mind?


The always-on JIT was a big misstep (IMO, the opt-in TorchScript model is much better). I tried Julia a few times and it was just too slow to be usable for anything remotely exploratory. Every year or so, I'd read "TTFP has been improved", so I'd try again, and it was still slow as molasses in Siberia. I suspect a lot of people had that experience and will be hard pressed to give Julia a real shot at this point, even if it does fix / has fixed the problem.


In general, I’d say there’s too much superficial flexibility but not enough control.

- I wrote this elsewhere: I find their approach to memory management/mutable arrays really hits the worst of both worlds (manual memory management and garbage collection). You end up trying to preallocate memory but don't actually have control over memory allocations (see the sketch after this list). I find the dynamic type system exacerbates this.

- It’s a very big language, even in the IR. So proper program transforms like mapping functions or autograd are quite difficult to implement.

- Static compilation is really hard, which makes it a non-starter for a lot of domains where it could have made inroads (robotics, games, etc).
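
A sketch of the preallocation pattern from the first point, with illustrative sizes and functions:

    using LinearAlgebra

    function step!(out, A, x)
        mul!(out, A, x)          # in-place multiply, intended to avoid allocation
        out .= max.(out, 0.0)    # in-place broadcast
        return out
    end

    A = rand(100, 100); x = rand(100)
    out = similar(x)             # preallocated buffer, reused across calls
    step!(out, A, x)

    # Even with this, a type-unstable caller (e.g. pulling A out of an untyped
    # container) can still trigger boxing and hidden allocations.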



> Recompiling everything every time.

> recompilation on every run breaks this

Your comment is exceedingly misleading. Whether and when Julia code gets compiled is up to the user.
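
For instance, a package author can shift compilation to precompile time; a minimal sketch using PrecompileTools (the workload contents are illustrative):

    using PrecompileTools

    @setup_workload begin
        data = rand(100)
        @compile_workload begin
            # Calls made here are compiled once, when the package is precompiled,
            # rather than in every fresh session.
            sum(data)
            sort(data)
        end
    end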


> Pkg.jl is also not great, version compatibility is kind of tacked on and has odd behavior.

Huh? I think Pkg is very good as far as package managers go, exceptionally so. What specifically is your issue with it?
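
For what it's worth, compatibility constraints are declared directly in a package's Project.toml and enforced by the resolver; a sketch (package names illustrative):

    [compat]
    julia = "1.10"
    DataFrames = "1.6"   # caret semantics: any 1.y.z with y >= 6, but not 2.0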


Pratt's method only targets the operator precedence languages, not the DCFLs in general, so it is much less powerful than LR parsing.


That's true as Pratt described it. I mentioned it because it's a good example of the general idea of extending recursive descent to handle more deterministic grammars than vanilla LL.
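
For the curious, a toy version of the idea, with binding powers guiding a recursive-descent loop (the token representation is made up):

    # tokens: e.g. Any[1, '+', 2, '*', 3]; returns (expression tree, next position)
    const BINDING = Dict('+' => 1, '*' => 2)

    function parse_expr(tokens, pos=1, min_bp=0)
        lhs = tokens[pos]; pos += 1              # a literal is always a valid prefix
        while pos <= length(tokens)
            op = tokens[pos]
            bp = BINDING[op]
            bp <= min_bp && break                # operator binds too weakly: stop here
            rhs, pos = parse_expr(tokens, pos + 1, bp)
            lhs = (op, lhs, rhs)                 # fold into a small expression tree
        end
        return lhs, pos
    end

    # parse_expr(Any[1, '+', 2, '*', 3]) == (('+', 1, ('*', 2, 3)), 6)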


Is Trump not supposed to be tough-on-crime? How does pardoning a drug dealer factor into that? Is Trump against the war on drugs?

