Hacker News: Hardliner66's comments

I started something similar, but my goal is to create a projectional editor for the language, where syntax is more of a presentation problem than a parsing problem.

https://github.com/hardliner66/vormbaar/


People always think that using some sort of formal verification is overkill, but then they end up hunting bugs for months that would be trivially detectable with such tools.

Formal verification tools allow you to write a specification in a common language that can be checked by a tool. Every time the spec changes, you can instantly verify that all invariants still hold.


It's a tooling problem. Nearly every comment I read about formal verification being overkill is about proposals to incorporate existing formal verification tools into existing workflows. It is not easy to get an organization to adopt TLA+ in a way that's useful for almost any problem.

Add a language feature like the above to TS and you'd see adoption overnight. Pretty much everyone is happy to let their build system add additional correctness guarantees if it's fast enough.


Yes, a lot of formal verification checks are (almost) free and most others are not. But if you at least take the free ones, why not…


I wish something like Lamport's TLA+ (https://lamport.azurewebsites.net/tla/tla.html) was supported in modern language compilers - perhaps with annotations/macros and a mini formal DSL.


And types too :)

I know many words have been spilled over why it shouldn't have them, but I remain unconvinced.

I modelled a trivial traffic light system to make sure the cars and pedestrians didn't have "green" at the same time. And they didn't, because they had "green_light" at the same time. Oops!
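The pitfall above can be sketched without TLA+ at all. Below is a minimal Python sketch (a brute-force state enumeration, not a real model checker; all names are illustrative): with free-form strings, the typo "green_light" makes the invariant check vacuously pass, while a shared enum makes the unsafe state visible.

```python
from enum import Enum
from itertools import product

# Untyped model: states are free-form strings, so a typo slips through.
car_states = ["red", "green"]
ped_states = ["red", "green_light"]  # oops: should have been "green"

def untyped_violations():
    # The check compares against "green", which "green_light" never equals.
    return [(c, p) for c, p in product(car_states, ped_states)
            if c == "green" and p == "green"]

assert untyped_violations() == []  # looks safe, but only because of the typo

# Typed model: a shared enum means there is only one way to say GREEN.
class Light(Enum):
    RED = "red"
    GREEN = "green"

typed_violations = [(c, p) for c, p in product(Light, Light)
                    if c is Light.GREEN and p is Light.GREEN]
assert typed_violations == [(Light.GREEN, Light.GREEN)]  # unsafe state found
```

The enum version is exactly the argument for types in a spec language: the mismatch becomes impossible to write down, instead of silently satisfying the invariant.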


It's still in pretty early development, but you may be interested in https://github.com/informalsystems/quint

> It combines the robust theoretical basis of the Temporal Logic of Actions (TLA) with state-of-the-art static analysis and development tooling.

And it is typed ;)


The funny thing is that you can extend Excel with C# to do basically anything. So just having the plugin in Excel allows for C# interop.


I hate take homes. I think my current company has a better approach by having new candidates do a code review of a MR we prepared. The MR is full of bugs and problems. Mixed code styles, weird commit messages, etc. And it’s not about finding every single one, but to see what the candidate is looking for to identify bad code and to see if they can review code.


Competitive programming is something I’d not put on my resume, unless the position is specifically about optimization.

If you micro-optimize every piece of software you write, that will be a problem. First, it's a tradeoff. You are going to produce code that abuses obscure methods or hard-to-follow semantics in order to theoretically boost performance. Why theoretically? Because your whole optimization is pointless unless your code runs as a separate application and you have already profiled it on the target where it's supposed to run.

Second, you're wasting time for a benefit that's not accounted for. You are not paid to produce the fastest code known to man. You are paid to implement features. And unless the wins are huge or the optimization is a no-brainer (e.g.: string builder vs string concatenation), you're delivering the wrong value.
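The string-builder example above can be sketched in Python (where `io.StringIO` or `"".join` plays the role of a string builder). Note this is a cost-model illustration, not a benchmark: CPython happens to optimize some `+=` cases, so the actual gap is runtime-dependent, but in languages like C# or Java repeated concatenation really is quadratic.

```python
from io import StringIO

N = 10_000

def concat():
    # Repeated concatenation: each += may copy the whole string so far,
    # giving quadratic work in the worst case.
    s = ""
    for _ in range(N):
        s += "x"
    return s

def builder():
    # A string builder appends in amortized constant time and
    # materializes the result once at the end.
    buf = StringIO()
    for _ in range(N):
        buf.write("x")
    return buf.getvalue()

# Both produce the same result; only the cost model differs.
assert concat() == builder() == "x" * N
```

This is the "no-brainer" class of optimization: same readability, strictly better asymptotics, so there is no tradeoff to weigh.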

So while we do care about performance (albeit a bit less than we probably should), the performance problems that we have are not solved by micro-optimizing some piece of code like it's done in coding competitions. You need an understanding of the whole system, how its parts interact, and what the performance requirement is. Even if you reduce the time it takes for your code to run by 99%, it's of no use if the code then needs to wait for a network call anyway; the overall time stays the same, making the whole effort worthless.

So while it’s a nice exercise, it’s not what’s needed in the field. Just make sure you do the obvious optimizations and try to avoid the obvious pessimizations.


> how do I get him interested in the topic?

Does he actually want to learn programming or do you want him to want to learn programming?

If it's the first, try programming games or give him his own little Linux box that you can put in a separate LAN without internet.

If it’s the second, then you don’t. You can show him cool stuff that you can do by programming and if he likes it, he’ll start wanting to learn. And if he doesn’t, then that’s fine too. No need to force a hobby on a kid.

> I personally found scratch to be boring as hell.

It doesn't really matter if YOU find it boring. If he's fine with it, then it's still a valid choice. IIRC there are even robots you can program in a Scratch-like environment, and it can teach plenty of the basics without all the noise from regular programming.

So just focus on making it fun for him. Either with something like Scratch or Screeps, or maybe try building a Minecraft bot with mineflayer.


First of all, I hate that the author pitches event-driven as something different from message passing. It's the same thing, just without a reply. And whether you need a reply or not is irrelevant to the use of message passing; it depends on the use case. Do I want to authenticate a user? Then a reply might be useful. Do I send performance metrics to a server? I might not need to know whether that worked, depending on what's defined.
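The point that a reply is a property of the message, not of the messaging style, can be sketched with a plain queue-based worker in Python (a minimal toy, not any particular framework; the message contents are made up):

```python
import queue
import threading

def worker(inbox):
    # One worker handles all messages. A message is (payload, reply_to);
    # reply_to is None for fire-and-forget events.
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown sentinel
            break
        payload, reply_to = msg
        if reply_to is not None:
            # Request/response: the sender asked for an answer.
            reply_to.put(payload.upper())
        # Otherwise: an event (e.g. a metric), nobody waits for a result.

inbox = queue.Queue()
t = threading.Thread(target=worker, args=(inbox,))
t.start()

# "Event-driven": send a metric, expect nothing back.
inbox.put(("metric: cpu=42", None))

# "Message passing with reply": same mechanism, plus a reply channel.
reply = queue.Queue()
inbox.put(("authenticate alice", reply))
assert reply.get(timeout=1) == "AUTHENTICATE ALICE"

inbox.put(None)
t.join()
```

Both sends go through the identical transport; the only difference is whether the sender attached a place for the answer to land.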


Even though I'll get flak for it, I'll call BS on the article.

It's the same as the phrase "(premature) optimization is the root of all evil". Does it mean you should never optimize? No. Does it mean you should always optimize as a last step? Also no.

Here's the full quote: "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

It's way more nuanced and basically says: "Stop wasting time on optimizations with little gain, but still do them where they are useful/necessary."

Right now I'm working on an embedded device running a small Linux. Our resource usage is well within bounds, so thinking about better architectures or optimizations is an imaginary problem. Right? No. Not at all. Making our software smaller and/or faster doesn't just mean we can put on more software; it also means we could produce the device more cheaply, because we need fewer resources.

Thinking about and experimenting with different architectures or newer technologies also seems like imaginary problem solving at first, but there is a good chance you improve your system as well. A better architecture could make your software more maintainable or give you the flexibility to implement new features in a way that was really cumbersome with the old code.

So while I agree with the sentiment that you should not implement things you don't need, I also think that there should be room for people to experiment and try out different things. Sometimes the people who have worked with the code the longest are blind to its many shortcomings, because it's normal for them to just work around them. But getting rid of those shortcomings can save you hundreds of man-hours of work in the long run.

To cut a long story short: Do experiment. Do think about problems you might have in the future. Do the mental exercise and think about how to improve your current code and architecture. But don't blindly implement it.

Always evaluate what you're trying to do. Check whether it improves the things it's supposed to improve, and also check that it doesn't make matters worse elsewhere. Get to know the tradeoffs and make an informed decision about whether changing something that works into something that's better is worth it to you.


Well, that’s basically a long way to say “it depends”. But in reality there often is a right way to do things. There is just no universally right way. And even if there is no completely right way, there are ways that are better and ways that are worse.

All in all, I think it’s a good thing to periodically challenge best practices, otherwise things can’t get better.

There might even be valid reasons to do things a certain way, because of special circumstances.

But you should still stick to best practices unless you have a reason not to. They are best practices for a reason.

Also, the notion that every single piece of software is completely individual, with a completely individual way to do things, is faulty. Yes, there might be some individuality, but not enough to throw all best practices out of the window.


The example somehow reminds me of proof-carrying code. If you obtain memory in PCC, you also get a proof that the memory is valid. If you want to use the memory, you need to pass the proof as well. Finally, the proof must be disposed of, and the only function that can do that for memory is free. This could be done for the reminder example as well.
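The obtain/use/dispose discipline described above can be sketched in Python. This is only an analogy to proof-carrying code, enforced at runtime rather than by a verifier, and every name here is made up for illustration:

```python
class ValidProof:
    """Witness that a block of memory is valid. Created only by alloc,
    consumed only by free."""
    def __init__(self, block):
        self._block = block
        self._alive = True

class Memory:
    def __init__(self, size):
        self.data = bytearray(size)

def alloc(size):
    # The memory never travels without its proof.
    block = Memory(size)
    return block, ValidProof(block)

def write(block, proof, index, value):
    # Every use of the memory demands the (still-live) proof.
    assert proof._alive and proof._block is block, "invalid proof"
    block.data[index] = value

def free(block, proof):
    # free is the only operation allowed to dispose of the proof.
    assert proof._alive and proof._block is block, "invalid proof"
    proof._alive = False

block, proof = alloc(4)
write(block, proof, 0, 7)
free(block, proof)
# write(block, proof, 1, 9)  # would now fail: the proof was consumed
```

In a language with linear or affine types (or Rust's ownership), the commented-out use-after-free would be rejected at compile time instead of at runtime, which is the real payoff of the PCC-style discipline.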

