
Congrats on knowing what's important to you (time and peace of mind, from what I read), and acting on it.

I work as a consultant, but for the same reasons: I choose to work only a few days a week, take the salary loss, and spend more time with my young kids. A time will come when I'll start working more, which will also unlock more meaningful work – but for now that's a choice I'm happy to make.


Thanks! And happy to hear you're doing that!

I also asked for a 3-day work week from my last company when I wanted to try and make Lunar paid. It's incredible how much your life changes when you're not defined by your job anymore.

Because you spend more time (4 days) doing something other than your job (3 days), you really start seeing what's actually important, and the question "what's your job?" no longer has a straightforward answer.


This ^^. What we call "work" gets totally redefined in later years. That flip makes you realise that you are not your job, and that your job is just a means to an end.


As noted by the other investigating organization (La Quadrature du Net), the problem goes beyond the algorithm's implementation.

The problem is that the CNAF deliberately targets small unintentional errors rather than large-scale intentional fraud.

Why? Because intentional fraud is harder to detect and to prove (you have to provide evidence that it was intentional).

So smaller and poorer families are disproportionately affected by the controls, because the algorithm was designed to do so. No computational adjustment can fix that: it is the initial intent that is broken.

Source: https://www-laquadrature-net.translate.goog/2023/11/27/notat... (auto-translated into English)


> The problem is that the CNAF deliberately targets small unintentional errors rather than large-scale intentional fraud.

The French welfare system is incredibly complex - see for instance this [0] simplified description of housing allowances, which is 80 (!) pages long, and that is not even the most complex part of the system. With such a system, there are massive amounts of errors, both too much money given and not enough money given. The scale is so large that the French Court of Accounts refused to certify the CNAF accounts last year [1]: those errors represent about 7.5% of the CAF budget.

So basically the probability of an error is just a function of how complex your situation is, and thus the "algorithm" targets more complex situations: a change in your marital situation, having adult children (which may or may not need to be taken into account when applying for benefits, depending on a bazillion variables), and so on, all increase your probability of being targeted.

[0] https://www.ecologie.gouv.fr/sites/default/files/Brochure-ba...

[1] https://www.ccomptes.fr/fr/publications/certification-des-co...


As a French person, I don't get why no politician ever talks about simplifying those things. It sounds so easy and such a quick win, leading to more visibility on the budget and to people getting easy access to the money they are due. But I think I know the answer: the current government doesn't want to make welfare easy to access; they want to deter people from using it unless they absolutely need it.


If France is anything like the Netherlands, the benefits system isn't used by a few poor people; it's used by the majority of the country, instead of adapting the tax code. Simplifying it is going to (unintentionally and intentionally) hurt some groups, and nobody wants to be on the hook for it, despite it having been an increasingly large election theme.

France at least has a culture of constitutional reboots.


> current government doesn't want to make welfare easy to access, they want to actually deter people to use it unless they absolutely need it.

The more complex a system, the more skill and resources (ability, time, finances) it takes to navigate it.

Eventually, complexity serves to exclude everyone but people who are able to make a career out of pursuing benefits - who are also more likely to be fraudsters.


Some cabinets have had a "ministère de la simplification" (a ministry for simplification).


It's quite simple I think:

1. for any given budget, there is no Pareto-improving reallocation: if you want to give more money to someone, then the money has to come from someone else.

2. given the current complexity, there are a _lot_ of edge cases to account for. If you do not want to create any losers with a reform, while also removing many of those edge cases, then you need to pump a lot of money to a lot of people so that the "edge case people" who become "average Joes" do not lose out. See for instance the people who end up with less disposable income when their pension is raised (https://www.alternatives-economiques.fr/vrais-faux-gagnants-...) (!). See also the uproar about the "montant net social": the exact income you need to report on welfare applications (basically net salary plus a bunch of things your employer pays for you that count as income) is now written on the pay slip. Nice simplification, right? Yet people who had been reporting the wrong income (generally just the net salary) were upset, convinced it was a plot to decrease welfare.

3. people genuinely love special cases. Hence the government's tradeoff between addressing a special case and adding more complexity always ends up with more special cases and more complexity.

Some examples: since housing is expensive, people want to help renters with cash payments (of course it can't be bundled with the basic income; it has to be its own benefit), but they also want some public housing with below-market rents. Now you need to account for the in-kind benefit of having a below-market rent in the rules of the housing benefit if you want to be relatively fair between those two populations.

Recently, the computation of the cash benefit for disabled people was changed: it now depends only on the recipient's income and not on the household's income (the main arguments being that using household income makes disabled people less autonomous on one hand, and decreases their incentive to work on the other). This means that some benefits are now computed at the individual level and others at the household level. Of course there is a transition period where people can be grandfathered into the old rules, so as not to create any losers.

And so on and so forth.

Large families need their own benefits, because they have unique(tm) needs; you just can't make a per-child benefit that simply scales.

And so on and so forth.

Did I mention that you want to help overseas territories with special fiscal rules?

And so on and so forth.

I think the two most prominent examples of this were the two failed Macron reforms: the first pension reform (a universal public pension fund instead of several) and the basic income (revenu universel d'activité - basically a merger of at least APL+RSA+PA). There is always a special category that loses out if its own complexity-inducing special case is ironed out.

4. because of points 1 to 3, no one understands anything, and thus there is a strong suspicion that the government is out to rob you of [your benefits | your pension | etc.] whenever there is a reform proposal.


Same issue as with tax fraud. Governments prefer dealing with mistakes and small frauds: they are cheap, quick wins, simple to detect, with no lawyers fighting back. The opposite is true for large frauds.

This was explained to me by an accountant when I wondered why they wanted me to fix what felt like insignificant errors, meanwhile a big corporation was in the news for what was clearly tax fraud but the government was dragging its feet about it.


It's also, generally, a complexity problem.

Corporate accounting is more complicated than family accounting. Even if the corporation isn't trying to do anything complicated!

Consequently, there are more edge cases and grey areas. As an accountant friend said to me, it's more like law than science -- knowing lots of laws and regulations, plus history, and deciding how to mostly correctly classify various things.

So something like trying to write the most efficient assembly algorithm possible, while Congress is modifying the ISA every year.

(Which isn't to say that loopholes don't exist, corporations don't abuse them, or corporate tax attorneys don't delay enforcement actions... but is to say that even in best case, family accounting is much simpler than corporate)


> It's also, generally, a complexity problem.

I don't think so, at least not in 2023. I worked several years in a tax agency and it was mainly a problem of "motivation". I have a friend who pursued "data warehouse" projects there for 30 years... nowadays you can crunch all the information and find patterns. I would even suggest that tax agencies should anonymize data and create data bounties to help them, in the same way DARPA creates cyber challenges [1].

[1] https://www.darpa.mil/about-us/timeline/cyber-grand-challeng...


How much of internal accounting state leaks into tax filings?

I assume when you're looking at forensic tax accounting, you're identifying present-year vs previous-year discrepancies?

Or is there enough required in filings to generate something like a complete shadow accounting for a company?


There are relationships between agencies and corporations to link information. It is not only your filings that are at stake.


I think it'd be impossible to anonymize data in such a way that it's still useful but not easily identifiable with public or partial private information.


> The problem is that the CNAF deliberately targets small unintentional errors rather than large-scale intentional fraud.

I think your observation is the most important one: you don't target the "Madoffs or SBFs", you just go after the low-hanging fruit, looking at simple probabilistic causality (A => B) instead of (or at the same time as) targeting big malicious actors. Big and/or complex crimes and corruption are protected.

The irony is that many times the crimes committed by big actors could be simpler to analyze, based on tax and financial information.


Related: The AI Incident Database [1]

> The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

[1] https://incidentdatabase.ai/


Are they taking this approach only in the negative form?

I.e., if they detect you've made a mistake and not claimed something you are eligible for, or paid some tax that you are actually eligible to deduct/offset somehow, does it notify you or automate the process of fixing that error?

I’m not opposed to the idea of automation to ensure compliance with the law / regulations. But enforcement should go both ways. The goal of automating policy should be for the policy to be maximally effective. It shouldn’t just penalise those who incorrectly claim, but rather maximise eligible claims. And by the same measure, ensure those who are ineligible are rejected.

Obviously, without any penalties this would lead to over-claiming and reliance on the system to reject. But I think the system should be resilient enough to handle that.

While it’s not to such an extreme extent, there are polar opposite attitudes / policies towards entitlement to government benefits in Australia and New Zealand. In Australia you may be eligible for something, but it’s your responsibility to know that and claim it. And the government takes a ‘we defend the benefits from those who are ineligible’ attitude (more so if the Liberal party is in power; see the Robodebt scandal).

In New Zealand they take a ‘these benefits must go to those who are eligible’ attitude. And they model the extent to which they fulfil that objective based on how many claim vs how many their model states are eligible.

The outcome is that in NZ they call after you’ve had a baby to ensure you’re receiving additional tax benefits, services, etc. In Australia you’d only receive a call / letter to notify you that your benefits are being cut off, and there’s some difficult-to-impossible method of re-applying, such as contacting a call centre whose queue is full by 9:30 and which then stops accepting calls…


The purpose of a system is what it does.


Kazuaki Morita was a programmer on Mario Bros., Zelda NES, Zelda SNES, Zelda Game Boy, and Zelda 64 (where he did the bosses and the fishing game), among many others.

He was often assigned the most challenging programming tasks. His coding style is so specific that it even shows in disassembled or decompiled code ("Ha, this file is classic Morita").


Can you elaborate on Morita's coding style?


That is a resume.

I wonder how he feels about modern Nintendo. I feel secondhand embarrassment for them.


Don't be.

I feel Nintendo is doing fine. "Their" biggest letdowns are the modern Pokémon games, but those are somewhat outsourced.


… why? Nintendo is arguably at its strongest in a long time.


I did this with every first-time contributor to https://github.com/zladx/LADX-Disassembly : I gave them commit rights immediately (so that they could merge their first PR themselves).

It did wonders to foster a community of contributors and get more patches coming. The CI ensures nothing breaks, and there has never been any trust incident.


Author here! Most of the engine documentation was figured out by many romhackers over the years; I merely started the efforts to write a comprehensive overview.

If you're interested in projects taking advantage of this knowledge, Daid's LADX Randomizer [0] is awesome. It includes a ton of assembly tweaks that make the engine more flexible and add many options to the game.

And Link’s Awakening Redux [1] is a good example of quality-of-life tweaks made to the game using the disassembly.

[0] LADXR: https://daid.github.io/LADXR/
[1] Link’s Awakening Redux: https://github.com/ShadowOne333/Links-Awakening-Redux


Nice. I wonder if they'd be willing to fix the bug where pressing left and right on the d-pad at the same time renders Link's sprite invisible. Not an issue on original hardware but quite dangerous when emulating.


This reminds me vaguely of the Sega Dreamcast controllers.

If I remember correctly, there is a hardware and software difference between pushing up on the joystick. Edit: Or rather some games depended on some analog noise(?) that is now filtered out by modern interfaces.

It doesn’t represent `{ x: 0, y: 1 }`, but I can’t remember whether that was because of the physical joystick, which was then remapped in software, or for some other reason, though I assume that’s why. [1][2]

[1]: https://github.com/p1pkin/demul/issues/392

[2]: https://github.com/flyinghead/flycast/issues/287


Given that original hardware prevented that, I’m surprised it wouldn’t be a default-on option in emulator frontends.


It depends on the emulator, across pretty much every platform that uses a joystick or d-pad, but in my experience the default most commonly is no left+right input at the same time. (It's semantics as to whether the implementation is "allow left+right input" defaulted to off, or "prevent left+right input" defaulted to on.)
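The filter itself is tiny either way; here's a rough sketch of the kind of sanitization a frontend might apply (purely illustrative, not taken from any particular emulator):

    def sanitize_dpad(up, down, left, right):
        # A physical d-pad rocker can't be pressed in two opposite
        # directions at once, so drop impossible pairs before the
        # state reaches the emulated joypad register.
        if left and right:
            left = right = False
        if up and down:
            up = down = False
        return up, down, left, right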


I'd also be interested to know what ends up happening if there are simultaneous presses— I assume last-one-wins would be the least surprising strategy for resolving it, but certainly several options are possible, and what is preferred might even be game-dependent.


Oh that’s neat. I’m struggling to think of bug prevention that’s similar on a PC. Orientation lock maybe?


The original hardware prevented it in the sense that it's physically impossible to push both the left and right buttons at the same time on the Game Boy d-pad, not through any software means.


Yes, I understood that it was "hardware" as in a fundamental constraint of the physical controller, but appreciate the clarification.

Certainly if it had been enforced by the console firmware, then it would be a no-brainer that emulators should do it.

Funny story about this actually is that a while ago I was retrofitting an old dance mat to be an XInput device via a Teensy, and I ran into a constraint where XInput's d-pad support only allowed one direction to be set at once, which obviously doesn't work for that use case. I ended up giving up and having the Teensy emulate a USB keyboard instead. (This was for use with Stepmania and Necrodancer on PC.)


Games of that era tended to hit the hardware directly for I/O, meaning that there is no firmware that can enforce any such thing. AFAIK the Game Boy boot ROM just verifies the Nintendo logo data, plays the chime, then boots the game, and after the game boots, the boot ROM completely disappears from the memory map.


Nintendo held onto this no-OS approach into the Wii era. Hence, if you ran a Japanese version of a game on a North American Wii, you got the Japanese Wii pause menu.

Meanwhile the PS3 and Xbox 360 had hypervisors.


Well, the Wii did have an OS that would receive updates over time. IOS was managed sort of shittily, though. Since older games would be linked against specific versions of IOS, they had to literally keep multiple versions of the OS around on NAND. They would occasionally stub out specific slots (usually those used by the System Menu, or by internal-only discs, or even slots they never used but that pirates did use).

Agreed, however, that all PPC code running during a game came from code on the disc, even "OS" things like the Home menu. The IOS code largely gatekept the advanced Wii features as compared to the GC.


Yeah, I remember that too— each game was shipping its own instance of the pause menu, so they would have subtly different behaviour, including around weird things like what the light on the disk drive did.

I imagine this kind of thing is also why the backcompat story for playing Wii games on a Wii U is so janky— you have to access them through the "vWii" environment, which I guess is basically just a kind of VM, so that it can provide a virtualized instance of the hardware that the game ROMs/discs are expecting to talk to.


I've never played this game on an emulator. That's pretty interesting. I need to find a ROM now.


What was censored, and what was present in the untranslated version?


Instead of losing her bikini top (which Link must then find), the mermaid loses her necklace. Instead of pulling a towel to cover herself when Link enters the room, the hippo lady being painted is edited to have no nude characteristics.



I've been working on a project on top of LADXR, many thanks to all involved.


To find someone on Mastodon, I would either enter their nickname in the search bar (no need for the domain, it will search on all instances) – or simply click on their profile on a post I like. Just like a centralized network. This tool just exists to do this automatically and in bulk.

It seems to me that finding someone on a federation is as easy as on centralized systems. Do you have a use case in mind where centralization makes it easier to find someone?
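For what it's worth, when you do know someone's full handle (user@domain), the underlying lookup is just a WebFinger request; a rough sketch in Python (illustrative only, error handling omitted):

    import json
    import urllib.request

    def resolve_handle(handle):
        # Resolve 'user@example.social' to its ActivityPub actor URL
        # via WebFinger, which is what Mastodon does under the hood.
        user, domain = handle.lstrip("@").split("@")
        url = (f"https://{domain}/.well-known/webfinger"
               f"?resource=acct:{user}@{domain}")
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        for link in data.get("links", []):
            if link.get("rel") == "self":
                return link.get("href")  # the actor document URL
        return None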


> To find someone on Mastodon, I would either enter their nickname in the search bar (no need for the domain, it will search on all instances)

Nope, it will only show accounts followed by local accounts.

Mastodon doesn't do any kind of cross-server search, and your "federated feed" is basically accounts on other instances followed by local accounts.

This makes hosting a private Mastodon server, for personal or family use, incredibly hard as discovery is basically impossible.

This can be fixed by using ActivityPub relays, but guess what? Mastodon's official servers, which are the ones that really matter, don't use them.

It's a big part of why I'm hesitant to use ActivityPub, as I believe discovery is a big part of modern social media, and Mastodon absolutely sucks at it.

This is coming from someone hosting his own Matrix server.


It will not search "all instances".

It will search a subset of instances.

How do we know this to be true?

a) there is no central listing of all mastodon instances, or even all public ones.

b) it would take FREAKING AGES TO COME BACK because there are so damn many, you can only make so many parallel requests at a time, and you have to process the results from all of them.


Can you expand on b)? I'm not sure I understand why this would be difficult. If you already had a list of instances, it seems like asynchronous clients could handle the scale (the source below lists about 4k instances, which isn't out of scope for simultaneous async tasks), and processing the results is probably fairly simple. Especially if you cached that list and only refreshed it every few minutes (so no new full-network request set for every query), it seems like this wouldn't be too crazy complicated. Have I missed something? A rough sketch of what I have in mind is below.

[1] https://fediverse.party/en/mastodon/
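(Sketch only: it assumes each instance exposes a search endpoint like /api/v2/search, which in practice is often authenticated or rate-limited, and the instance list and query are placeholders.)

    import asyncio
    import aiohttp

    INSTANCES = ["mastodon.social", "fosstodon.org"]  # in reality, ~4k entries

    async def search_instance(session, domain, query):
        # Assumes an unauthenticated /api/v2/search endpoint; many
        # instances restrict or rate-limit this in practice.
        url = f"https://{domain}/api/v2/search"
        try:
            async with session.get(url, params={"q": query},
                                   timeout=aiohttp.ClientTimeout(total=10)) as resp:
                if resp.status != 200:
                    return domain, []
                data = await resp.json()
                return domain, data.get("accounts", [])
        except (aiohttp.ClientError, asyncio.TimeoutError):
            return domain, []

    async def search_all(query):
        async with aiohttp.ClientSession() as session:
            tasks = [search_instance(session, d, query) for d in INSTANCES]
            return await asyncio.gather(*tasks)

    results = asyncio.run(search_all("some_user"))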


I used Scalingo for Rails apps a while ago, and was quite impressed by the quality of the platform. Very Heroku-like. Maybe not as polished, but still doing the PaaS job pretty well (git push, and you're done).


I was expecting someone to post a comment along the lines of "Pixelmator is great for picture editing on macOS"; it is one of the best Mac editors.

But I'm also delighted to see your username pop up :) I used quite a few of your wallpapers back in 2002, when I was making desktop customization packs for Windows XP, and I have fond memories of them.


thank you very much :-) those were the times!


Goddamn what a small world, thanks for the memories Vlady. I was just a wee lad when I found your stuff way back.


I'm still drawing and publishing :-) although not as often, I must admit. Btw I do draw in Pixelmator Pro now, it fits my drawing habits really well.


Vlad your work circa ~2006 inspired me to go to design school (which I failed out of and went into computer science instead haha). So thank you :)


I'm glad I've discovered you and your artwork thanks to these comments :)


I like nothing more than dropping into a new project and, rather than figuring out the commands to make it run, seeing that it has a Makefile with a handful of targets for common tasks.

With some colleagues, we wrote an article about the benefits of this approach a few years ago: https://blog.capitaines.fr/2016/09/30/standardizing-interfac...
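For illustration, a minimal Makefile along those lines could look like the sketch below; the target names and underlying commands are just one possible convention (here for a hypothetical Python project), not prescriptive, and recipe lines must be indented with a tab:

    .PHONY: install test run

    install:  ## install project dependencies
    	pip install -r requirements.txt

    test:     ## run the test suite
    	pytest

    run:      ## start the app locally
    	python -m myapp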


And it still holds true for development today! Really, any organization that has multiple languages should consider making their devs' lives easier with Make.

And of course, it plays great with Docker & Compose, here's a write-up we did on using Make with container tooling: https://shipyard.build/blog/makefiles-for-modern-development...


I could not agree more. However, most people are on Windows, for which obtaining GNU Make is painful, or maybe even impossible; at the very least, the path handling has many sharp edges.


I don't comprehend how people can develop on Windows (without the Linux subsystem).

I have to do that every now and then (to ship the occasional C++ or Rust build on Windows) and it's the stuff of nightmares. Stuff breaking randomly from one day to the next, env variables to be set in weird ways, GUI installers, 8 different versions of MinGW or similar. Recently I've seen that there are a few package managers now (I used Chocolatey and at least 2 others just trying to get something to compile), but still, compiling something trivial is always an adventure.

Mac OS X is kind of okay. Brew is barely decent and things mostly work (unless you discover you need to install 12GB of Xcode for some dependency, or your script expects coreutils instead of the BSD tools).

Every Linux distro comes with a package manager, and compiling is trivial.


Homebrew is a subpar choice on macOS; MacPorts is faster, has more packages, and is implemented more correctly than Brew.

Additionally, MacPorts was co-created by an engineer who also created the original FreeBSD Ports system, and thus hews much more closely to standard UNIX/BSD practices.

I’m not sure how and when Homebrew became the standard, but it is definitively worse.


I came to realise this too.

Using Homebrew with multiple users is excruciating, and an eye-opener on how system-level software should really be installed.

Homebrew insists on avoiding root privileges whilst also installing packages system-wide. That works fine and is invisible with one user but falls down hard otherwise.

Their documentation is incorrect too, saying that this is all fine because “we install in /usr/local/bin”. It’s not easy to change this.

The solution was to embrace MacPorts which correctly requires root privileges to install system-wide packages.

I haven’t looked back since. I haven’t missed brew or any software that’s available on brew alone.


Too bad that when I want to install newer fancy tools on Linux, it's easier to unify the environment with Homebrew than to use Homebrew only on Linux, which has a very weird quirk: it wants to install into /home/linuxbrew/.linuxbrew (or else it'll start compiling many packages instead of using prebuilt binary packages) instead of /opt/homebrew, like any sane decision would have been.

I also don't understand Nix, which wants to create 30 users for its build process, among a few other unintuitive decisions. Otherwise it's good that it works the same on macOS and Linux.


For me compiling usually is something like `cargo build`, `ng build`.

I remember having problems with libs that require installing & registering a library somewhere such that CMake can find it. However, I distanced myself a bit from C(++), so that doesn't really happen anymore :)

I avoid MinGW and don't use any package manager besides the Windows Store (if you want to call that a package manager).

Can't complain. Sometimes there is stuff that simply doesn't support Windows -> WSL. When there is Docker, it doesn't matter anyway...

My strategy is don't fight Windows and you'll be happy


From your description it sounds like you might be going off the beaten track and hitting problems.

When I was doing C++ on Windows getting a dev environment setup just meant installing Visual Studio with an appropriate Windows SDK version (or the Windows SDK + build tools for a build system).

You can have multiple VS versions installed side-by-side. To get a terminal with environment variables set correctly you just need to use the shortcuts from your VS installation.

For third party dependencies we checked the headers and (pre-built) binaries into the repository. I don’t remember ever having more than a dozen or so in total. It was usually things like boost and zlib.

Having done that you can just point CMake directly at the packages rather than worrying about FindPackage.

Working in tools like Python and Node, personally I often miss the simplicity and stability of this approach.


> I don't comprehend how people can develop on windows

They want to distribute their app on Windows?


That’s why, not how. And you can cross-compile from Linux for some stacks.


It's easier to target Windows from Windows than from Linux.


It's usually not by choice. Medium to large enterprises often demand this so they can manage the employee hardware.


It depends a huge amount on the language and toolchain.

MSVS is really quite nice.


Indeed – although I guess WSL has made Unix tools slightly easier to use nowadays.

But even without executing the Makefile, simply reading it can tell new developers which language-specific command needs to be run to build the project (and then the command can be copy-pasted and run manually).


WSL made running a linux VM less hassle.


If you show up to the job with a Windows box, I expect you to know how to do the job in Windows.

It's the same problem with docker-compose files; how do you expect the developers not running on Windows to fix your Windows problems?


If you're already using the Chocolatey package manager for Windows (I do, and am mostly happy with it), it offers GNU make:

https://community.chocolatey.org/packages/make


You can put a copy of make in the repo.


For added stability, make sure it’s statically compiled (which it might be already...)


I've been using GNU Make 4.2.1 from here: https://github.com/mbuilov/gnumake-windows - depends only on kernel32.dll. I assume the latest 4.3 build there is the same.


I’m wondering if you could compile make to wasm and include it as a cross platform dev dependency?


Thank god WSL is around. Otherwise I would've given up on supporting Windows environment for those who still stick to Windows.


Are most people who would have a use for a tool like Make on Windows?


Well, most developers I work with benefit from a task runner. Now, we use different runners for different repos. I’d prefer being able to go into a repo and do «make test» and have it work, regardless of language, framework, purpose of the repo.


I have never bumped into that article before, thanks for that.

I did implement that uniform interface, though, in https://github.com/ysoftwareab/yplatform


I guess to encourage drunk people not to drive, which makes the roads safer for everyone.

