It's so important to remember that unlike code, which can be reverted, most file system and application operations cannot.
There's no sandboxing, no snapshots, no revision history, no rollbacks, nothing.
I expect to see many stories from parents, non-technical colleagues, and students who irreparably ruined their computer.
Edit: most comments are focused on pointing out that version control & file system snapshot exists: that's wonderful, but Claude Cowork does not use it.
For those of us who have built real systems at low levels, I think the alarm bells go off seeing a tool like this, particularly one targeted at non-technical users.
Frequency vs. convenience will determine how big of a deal this is in practice.
Cars have plenty of horror stories associated with them, but convenience keeps most people happily driving everyday without a second thought.
Google can quarantine your life with an account ban, but plenty of people still use gmail for everything despite the stories.
So even if Claude cowork can go off the rails and turn your digital life upside down, as long as the stories are just online or "friend of a friend of a friend", people won't care much.
Considering that the ubiquity and necessity of driving cars is overwhelmingly a result of intentional policy choices, irrespective of what people wanted or what was good for the public interest... actually, that's quite a decent analogy for integrated LLM assistants.
People will use AI because other options keep getting worse and because it keeps getting harder to avoid using it. I don't think it's fair to characterize that as convenience though, personally. Like with cars, many people will be well aware of the negative externalities, the risk of harm to themselves, and the lack of personal agency caused by this tool and still use it because avoiding it will become costly to their everyday life.
I think of convenience as something that is a "bonus" on top of normal life typically. Something that becomes mandatory to avoid being left out of society no longer counts.
I am a car enthusiast so don't think I'm off the deep end here, but I would definitely argue that people love their cars as a tool to work in the society we built with cars in mind. Most people aren't car enthusiasts, they're just driving to get to work, and if they could get to work for a $1 fare in 20 minutes on a clean, safe train they would probably do that instead.
> So even if Claude cowork can go off the rails and turn your digital life upside down, as long as the stories are just online or "friend of a friend of a friend", people won't care much.
This is anecdotal, but "people" care quite a lot in the energy sector. I've helped build our own AI agent pool and roll it out to our employees. It's basically a LibreChat with our in-house models, where people can easily set up base instruction sets and name their AIs funny things, but it is otherwise similar to using Claude or ChatGPT in a browser.
I'm not sure we're ever going to allow AIs access to filesystems; we barely allow people access to their own files as it is. Nothing that has happened in the past year has altered the way our C level views the security issues with AI in any direction other than being more restrictive. I imagine any business that cares about security (or is forced to care by legislation) isn't looking at this as they do cars. You'd have to be very unlucky (or lucky?) to shut down the entire power grid of Europe with a car. You could basically do it with a well-placed AI attack.
Ironically, you could just hack the physical components, which probably haven't had their firmware updated for 20 years. If you even need to hack it, because a lot of it frankly has built-in backdoors. That's a different story that nobody at the C level cares about, though.
That's what I am saying though. Anecdotes are the wrong thing to focus on, because if we just focused on anecdotes, we would all never leave our beds. People's choices are generally based on their personal experience, not really anecdotes online (although those can be totally crippling if you give in).
Car crashes are incredibly common and likewise automotive deaths. But our personal experience keeps us driving everyday, regardless of the stories.
Airbags, yes. But you can't just make it provably impossible for a car to crash into something and hurt/kill its occupants, other than not building it in the first place. Same with LLMs - you can't secure them like regular programs without destroying any utility they provide, because their power comes from the very thing that also makes them vulnerable.
Once upon a time, in the magical days of Windows 7, we had the Volume Shadow Copy Service (aka "Previous Versions") available by default, and it was so nice. I'm not using Windows anymore, and at least part of the reason is that it's just objectively less feature complete than it used to be 15 years ago.
Q: What would prevent them from using git-style version control under the hood? The user doesn't have to understand git; Claude can use it for its own purposes.
I didn't actually check out the app, but some aspects of application state are hard to serialize, and some operations are not reversible by the application. E.g.: sending an email. It doesn't seem naively trivial to accomplish this for all apps.
So maybe on some apps, but "all" is a difficult thing.
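For the file side at least, the per-turn snapshot idea is simple enough to sketch. This is a minimal, purely illustrative copy-based version (all names are made up; a real tool would use git or CoW filesystem snapshots, and nothing here helps with irreversible actions like sending email):

```python
import shutil
import tempfile
from pathlib import Path

class TurnSnapshots:
    """Copy a workspace before each agent turn so file edits can be rolled back."""

    def __init__(self, workspace: Path):
        self.workspace = Path(workspace)
        self.snapshots = []
        self._store = Path(tempfile.mkdtemp(prefix="turnsnaps-"))

    def begin_turn(self) -> int:
        """Snapshot the workspace and return the turn index."""
        turn = len(self.snapshots)
        dest = self._store / f"turn-{turn}"
        shutil.copytree(self.workspace, dest)
        self.snapshots.append(dest)
        return turn

    def rollback(self, turn: int) -> None:
        """Discard the current workspace and restore the chosen snapshot."""
        shutil.rmtree(self.workspace)
        shutil.copytree(self.snapshots[turn], self.workspace)

# usage: the agent clobbers a file mid-turn, and we revert
ws = Path(tempfile.mkdtemp(prefix="ws-"))
(ws / "notes.txt").write_text("important data")
snaps = TurnSnapshots(ws)
t = snaps.begin_turn()
(ws / "notes.txt").write_text("oops, the agent clobbered this")
snaps.rollback(t)
print((ws / "notes.txt").read_text())  # -> important data
```

Full copies obviously don't scale; the point is only that per-turn, per-file revert is mechanically simple for files, while application and network state is where it gets hard.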
Let's assume that you can. For disaster recovery, this is probably acceptable, but it's unacceptable for basically any other purpose. Reverting the whole state of the machine because the AI agent (a single tenant in what is effectively a multi-tenant system) did something incorrect is unacceptable. Managing undo/redo in a multiplayer environment is horrific.
Maybe not for very broad definitions of OS state, but for specific files/folders/filesystems, this is trivial with FS-level snapshots and copy-on-write.
Ok, you can "easily", but how quickly can you revert to a snapshot? I would guess creating a snapshot for each turn change with an LLM becomes too burdensome to let you iterate quickly.
Well, there is CRIU on Linux, for what it's worth, which can at least snapshot the state of an application, and I suppose something similar must be available for filesystems as well.
Also, one can simply run a virtual machine that can do that, but then the issue becomes how apps from outside connect to the VM inside.
I wonder if in the long run this will lead to the ascent of NixOS. They seem perfect for each other: if you have git and/or a snapshotting filesystem, together with the entire system state being downstream of your .nix file, then go ahead and let the LLM make changes willy-nilly; you can always roll back to a known good version.
NixOS still isn't ready for this world, but if it becomes the natural counterpart to LLM OS tooling, maybe that will speed up development.
Git only works well for text files. Everything else is a binary blob, which, among other things, leads to merge conflicts, storage explosion, and slow git operations.
Indeed there are, and this is not rocket science. Word documents offer a change history, deleted files go to the trash first, there are undo functions, Time Machine on macOS, similar features on Windows, even sandbox features.
I mean, I'm pretty sure it would be trivial to tell it to move files to the trash instead of deleting them. Honestly, I thought that on Windows and Mac, the default is to move files to the trash unless you explicitly say to permanently delete them.
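The "trash instead of delete" policy really is a few lines if you're content with a plain trash directory. A hedged sketch (names and paths are invented; real desktop trash folders have platform-specific locations and restore metadata):

```python
import tempfile
import time
from pathlib import Path

def soft_delete(path: Path, trash_dir: Path) -> Path:
    """Move a file into a trash directory instead of unlinking it.
    A millisecond timestamp suffix avoids clobbering earlier
    deletions of a file with the same name."""
    trash_dir.mkdir(parents=True, exist_ok=True)
    target = trash_dir / f"{path.name}.{int(time.time() * 1000)}"
    # rename() fails across filesystems; real code would copy then unlink
    path.rename(target)
    return target

# usage
tmp = Path(tempfile.mkdtemp())
f = tmp / "report.txt"
f.write_text("keep me")
moved = soft_delete(f, tmp / ".trash")
print(f.exists(), moved.read_text())  # -> False keep me
```

An agent given only a `soft_delete`-style tool, rather than raw `rm`, at least leaves a recovery path for the user.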
Everything on a ZFS/BTRFS partition with snapshots every minute/hour/day? I suppose, depending on what level of access the AI has, it could wipe that too, but it seems like there's probably a way to make this work.
I guess it depends on what its goals at the time are. And access controls.
It may just trash some extra files due to a fuzzy prompt, or it may go full psychotic and decide to self-destruct while looping "I've been a bad Claude", intentionally deleting everything, or the partitions, to "limit the damage".
A "revert filesystem state to x time" button doesn't seem that hard to use. I'm imagining this as a potential near-term future product implementation, not a home-brewed DIY solution.
A filesystem state in time is VERY complicated to use if you are reverting the whole filesystem. A granular per-file revert should not be that complicated, but it needs to be surfaced easily in the UI and people need to know about it (in the case of Cowork, I would expect the agent to use it as part of its job, so transparent to the user).
I would never use what is proposed by OP. But, in any case, Linux on ZFS that is automatically snapshotted every minute might be (part of) a solution to this dilemma.
IIUC, this is a preview for Claude Max subscribers - I'm not sure we'll find many teachers or students there (unless institutions are offering Max-level enterprise/team subscriptions to such groups). I speculate that most of those who will bother to try this out will be software engineering people. And perhaps they will strengthen this after enough feedback and use cases?
In theory the risk is immense and incalculable, but in practice I've never found any real danger. I've run wide open powershell with an OAI agent and just walked away for a few hours. It's a bit of a rush at first but then you realize it's never going to do anything crazy.
The base model itself is biased away from actions that would lead to large scale destruction. Compound over time and you probably never get anywhere too scary.
I hope we see further exploration into immutable/versioned filesystems and databases where we can really let these things go nuts, commit the parts we want to keep, and revert the rest for the next iteration.
Most of these files are binary and are not a good fit for git's text-oriented diff tracking... you basically end up with a new full-sized binary for every file version. It works from a versioning perspective, but it is very inefficient and not what git was built for.
It works on Linux, Windows, macOS, and BSD. It's not locked to Apple's ecosystem. You can back up directly to local storage, SFTP, S3, Backblaze B2, Azure, Google Cloud, and more. Time Machine is largely limited to local drives or network shares. Restic deduplicates at the chunk level across all snapshots, often achieving better space efficiency than Time Machine's hardlink-based approach. All data is encrypted client-side before leaving your machine. Time Machine encryption is optional. Restic supports append-only mode for protection against ransomware or accidental deletion. It also has a built-in check command to check integrity.
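Chunk-level deduplication is the key trick in that list and is easy to illustrate with a toy. This sketch uses fixed-size chunks purely for brevity; restic actually uses content-defined chunking (a rolling hash picks boundaries) so insertions don't shift every subsequent chunk:

```python
import hashlib

def store_chunks(data: bytes, store: dict, chunk_size: int = 4) -> list:
    """Split data into chunks and store each under its SHA-256 digest.
    Chunks already present in the store are not stored again, so
    identical data shared across snapshots is kept only once."""
    ids = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # dedup: no-op if chunk exists
        ids.append(digest)
    return ids

store = {}
snap1 = store_chunks(b"AAAABBBBCCCC", store)
snap2 = store_chunks(b"AAAABBBBDDDD", store)  # shares two chunks with snap1
print(len(store))  # -> 4 unique chunks stored instead of 6
```

Each snapshot is just a list of chunk IDs, which is why many snapshots of mostly-unchanged data stay cheap.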
Time Machine has a reputation for silent failures and corruption issues that have frustrated users for years. Network backups (to NAS devices) use sparse bundle disk images that are notoriously fragile. A dropped connection mid-backup can corrupt the entire backup history, not just the current snapshot. https://www.google.com/search?q=time+machine+corruption+spar...
Time Machine sometimes decides a backup is corrupted and demands you start fresh, losing all history. Backups can stop working without obvious notification, leaving users thinking they're protected when they're not. https://www.reddit.com/r/synology/comments/11cod08/apple_tim...
Restic is fantastic. And restic is complicated for someone who is not technical.
So there is a need for something that works, even if not in an optimal way, that saves people's data.
Are you saying that Time Machine does not back up data correctly? But then there are other services that do.
Restic is not for the everyday Joe.
And to your point about "ignorant people": by that logic, you are an ignorant person because you do not create your own medicine, produce your own electricity, paint your own paintings, or build your own car. To a biochemist specializing in pharma (or Walt in Breaking Bad :)), you are an ignorant person unable to do the basic stuff: synthesizing paracetamol. It is a piece of cake.
Yes, and I think we're already seeing that in the general trend of recent linux work toward atomic updates. [bootc](https://developers.redhat.com/articles/2024/09/24/bootc-gett...) based images are getting a ton of traction. [universal blue](https://universal-blue.org/) is probably a better brochure example of how bootc can make systems more resilient without needing to move to declarative nix for the entire system like you do in NixOS. Every "upgrade" is a container deployment, and you can roll back or forward to new images at any time. Parts of the filesystem aren't writeable (which pisses people off who don't understand the benefit) but the advantages for security (isolating more stuff to user space by necessity) and stability (wedged upgrades are almost always recoverable) are totally worth it.
On the user side, I could easily see [systemd-homed](https://fedoramagazine.org/unlocking-the-future-of-user-mana...) evolving into a system that allows snapshotting/roll forward/roll back on encrypted backups of your home dir that can be mounted using systemd-homed to interface with the system for UID/GID etc.
These are just two projects that I happen to be interested in at the moment - there's a pretty big groundswell in Linux atm toward a model that resembles (and honestly even exceeds) what NixOS does in terms of recoverability on upgrade.
Or rather ZFS/BTRFS/bcachefs. Before doing anything big I make a snapshot; it saved me recently when a huge Immich import created a mess: `zfs rollback /home/me@2026-01-12`... and it's like nothing ever happened.
Somewhat related is a concern I have as things get more "agentic", tied to the prompt-injection worries: without something like legally bullet-proof contracts, aren't we moving into territory where we are basically "employing" what could be "spies" at every level? From the personal (AI company staff having access to your personal data/prompts/chats), to business/corporate espionage, to domestic and international state-level actors who would also love to know what you are working on, what you are thinking and chatting about, and maybe what mental health challenges you are working through with an AI chat therapist.
I am not even certain if this issue can be solved since you are sending your prompts and activities to "someone else's computer", but I suspect if it is overlooked or hand-waved as insignificant, there will be a time when open, local models will become useful enough to allow most to jettison cloud AI providers.
I don't know about everyone else, but I am not at all confident in allowing access and sending my data to some AI company that may just do a rug pull once they have an actual virtual version of your mind in a kind of AI replication.
I'll just leave it at that point and not even go into the ramifications of that, e.g., "cybercrimes" being committed by "you", which is really the AI impersonator built based on everything you have told it and provide access to.
>>I expect to see many stories from parents, non-technical colleagues, and students who irreparably ruined their computer.
I do believe the approach Apple is taking is the right way when it comes to user facing AI.
You need to reduce AI to being an appliance that does one, or at most a few, things perfectly right, without exposing many controls that can have unexpected consequences.
The real fun is robots. I'm not sure anyone should be hurrying on that end.
>>Edit: most comments are focused on pointing out that version control & file system snapshot exists: that's wonderful, but Claude Cowork does not use it.
Also, in my experience this creates all kinds of other issues. Going back up a tree creates all kinds of confusion and leaves the system inconsistent with regard to whatever else it is you are doing.
You are right in your analysis that many people are going to end up with totally broken systems
There were a couple of posts here on Hacker News praising agents because, it seems, they are really good at being a sysadmin.
You don't need to be a non-technical user to be utterly fucked by AI.
Theoretically, the power drill you're using can spontaneously explode, too. It's very unlikely, but possible - and then it's much more likely you'll hurt yourself or destroy your work if you aren't being careful and didn't set your work environment right.
The key for using AI for sysadmin is the same as with operating a power drill: pay at least minimum attention, and arrange things so in the event of a problem, you can easily recover from the damage.
I assumed we were talking about IT professionals using tools like Claude here? But even for normal people it's not really hard, if they manage to leave behind the cage in their head that is MS Windows.
My father is 77 now and only started using a computer above age 60. He never touched Windows, thanks to me, and has absolutely no problems using (and, at this point, administrating) it all by himself.
I thought that too but they're surprisingly fast. I tracked a dot across the Atlantic (US East Coast to UK) and it took around 4-5 days, which is about right.
There's a very nice effect where if you zoom in, time slows.
Electricity, water, roads, and bridges are all public infrastructure. Why should payments be any different?
Without summoning the decentralized block-based "currency" crowd, I would like to point out that in the entire lifespan of such technologies, they have never received widespread institutional or legislative buy-in like this EU initiative to build a digital Euro.
While USDC and BTC have been used as de facto currencies in some countries, there is truly no equivalent adoption in any meaningfully mature economic zone such as the EU/NA/CN.
Absolutely agree. The idea that private corporations manage our digital payments is crazy if you ever imagine that happening to physical payments. Imagine if Bank of America got to decide whether the dollar bill you're trying to use is too damaged. That should be between me, the recipient, and a public body.
> Why should water be public infra but food is not?
The main reason why infrastructure of any kind (water, sewage, etc) is a public infrastructure - even in largely privatized economies - is that infrastructure is essentially a natural monopoly. Food on the other hand isn't and it can largely be traded as a commodity (which is, at least in my opinion, a major reason why our food system is so broken).
If your only expectation is that it provides enough calories for your population, you are absolutely right. If you have a look at the bigger picture, the issues are plentiful. On the producer side, farmers are operating at relatively thin margins which encourages consolidation and unsustainable farming practices. This in turn leads to extensive soil degradation and fertilizer use, which is unsustainable - both financially and ecologically.
On the consumer side, people are becoming more overweight (which cannot exclusively be attributed to the food system, but diet of course plays a significant role). Food is becoming more expensive and lower quality. Food waste also still is a major problem.
Many issues are shared between the US and the European food system, although they may not be as extreme as in the US. However, it does not feel like there is actual political will to steer the ship in a different direction.
>Why should water be public infra but food is not?
The water pipes are public infra. They pump it into your house.
The more people that use the same system, the cheaper it can get. Laying competing systems of water pipes to houses to let companies compete would simply drive up the cost for everyone.
Same with electricity, gas lines, sewage...
Water itself is not publicly owned. You can buy water in the store like food.
We have the same with fiber to the home in France. When a company lays the fiber, it has to allow others to use it too (it is paid for that service, but the amount is regulated).
When fiber arrives, there is a three-month waiting period before any company can provide the service. This is intended to give everyone enough time to prepare an offer.
When I got mine, I called the regulation authority (Arcep), who gave me the date and hour my line would open. On that day I called my preferred provider, who told me "it is coming! we will call you!", and then the one who laid the cable, who told me "we can come in 2 days". I chose the latter.
A few years later the preferred one finally made its way to my area...
You should share your answer instead of posing a rhetorical question, because the answer isn't obvious at all and ranges across a large variety of options, including that food should be infrastructure.
Just to add to the conversation: personally, I go back and forth a bit on which things should be publicly or privately owned, farms especially.
My general reasoning is that when a "best" solution is known, monopolies tend to form; monopolies tend to extract as much economic rent as possible; I'd rather economic rents be extracted by a government for the purpose of national benefit rather than personal enrichment.
Conversely, when there is no known "best" solution, a free market allows a range of entrepreneurs the opportunity to try their ideas in the hope of cornering the market.
I think water is probably in the first, but with a caveat that this is a high-level thing and it's fine to have hundreds of different companies trying to figure out the best ways to make and install municipal pipes, that work is contracted to by local governments.
Food? I dunno, weather is still a massive dice roll for farming output. Perhaps nationalisation would work, perhaps subsidies, price regulations for inputs and floors for outputs, are the least wrong way to do this. But I'm extremely uncertain.
I think most governments do recognise food as both "strategic infrastructure" and absolutely vital to their re-election chances.
European governments govern food supply with cash subsidies to farmers, land use rules helping farmers, special immigration rules for agricultural labourers, special extra-low inheritance taxes for farmers, special subsidies for things like having hedges between fields, special low-tax fuel for farm equipment, different tax rates for different foodstuffs (bread vs cake vs wine vs beer), provision of super cheap water for irrigation, minimum price guarantees with governments buying up over-produced products, special border controls for fresh goods that can't be held up, special border controls for live animals, entire government departments for things like monitoring and controlling the spread of animal disease, rules on precisely what chemicals can be used, rules about things like chemical run-off into waterways, rules about animal welfare, rules about slaughterhouse conditions, rules about package materials, package sizes, package labels, rules about how much pork must be in a sausage for it to be called a pork sausage, rules about who can call their product 'champagne' or 'parmesan'.
If the payments industry was regulated like the food industry, it would be more regulated, not less.
Food from multiple suppliers is easy to put on trucks going to multiple shops using the same roads. Roads are a workable shared-access medium. Water really is not, unless you deliver it in tanker trucks using roads.
Electricity itself is fungible in the moment, so electricity can use the shared-access medium of the grid. But similarly, it makes little sense to have multiple roads in densely built areas. So both roads and water pipes end up as natural monopolies in built-up areas.
Food should be public infrastructure, and the subsidies every country gives to farmers are a good indication of that.
Not all food, but produce, bread, milk, infant foods, flour, rice, and other cereals being sort of price-controlled the way water/electricity is in most places would benefit mostly everyone.
Medicine should count too. And a lot of other things that we often realize to be essential only in a crisis situation. But I'm sure GP didn't intend their list to be exclusive.
Electricity, water, roads and bridges are natural monopolies.
Payments aren't, and there is no reason for the State to monopolize them. Especially given the EU's poor track record on fostering innovation. EU bureaucrats will "regulate" it to the bone, increasing compliance costs for processors and enabling mass surveillance. We'll be back to the start.
In an age where card payments are ubiquitous, being suddenly cut off from VISA/MasterCard networks can severely disrupt a country's economy. Especially if it heavily relies on tourism.
The EU prevents sellers from surcharging depending on the type of card used (PSD2, Directive (EU) 2015/2366, Art. 62(4)). It in effect opened the door to a Visa/Mastercard duopoly, as no local competitor could emerge and compete on price.
When it comes to tourism, this problem will always happen if the tourists are coming from the side that's cutting off the other. Without an interface between European and American payment systems, Americans won't be able to pay in Europe.
It's way more severe than that since Americans are not the only ones relying on VISA/MasterCard for payments abroad. The presence of other payment systems would make it easier for any non-USA country to still do payments.
- The EU is not a state; it's a governance body composed of representatives from individual member states. Every state is responsible for implementing its own take on a directive.
- "EU poor track record on fostering innovation": many things you use online were researched and conceptualized in the EU. Even if they go elsewhere for funding, don't mistake where "innovation" happens for where it gets packaged by VC money for sale and enshittification.
- Compliance costs: I think that's only expensive for companies who intend to sell or otherwise do something shady with user data. Remember, not collecting data makes you instantly compliant with zero cost.
It's a supranational institution that dictates laws to States, with a budget and coercive powers against States. It just lacks an army of its own. Whether it's a proper state or not doesn't matter.
> conceptualised in the EU
No, in Europe. No EU bureaucrat conceptualizes things. EU != Europe.
> I think that's only expensive for companies who intend to to sell or otherwise do something shady with user data. Remember, not collecting data makes you instantly compliant with zero cost.
A lot of businesses need consumer data to improve their offerings and be competitive against the big boys. And GDPR lawyers aren't cheap, so even if you keep the minimal amount of data, you have to fork out €10k+ to review everything, etc. The requirements for AI are even worse. All of those compliance jobs are unproductive and a burden on EU companies.
Same for the tax compliance obligations, which are ever increasing and now require you to record and document everything, especially if you do cross-border operations, as you are considered guilty by default if you don't.
We could also talk about the requirements to audit your sourcing chain for "human rights abuses", which ends in compliance hell for industrial companies with 2k+ suppliers, while of course Chinese companies don't have this problem.
The EU doesn't do any cost/benefit analysis on this, and just supposes that companies will magically find resources to deal with its new regulatory "innovations".
Ok, I think you're reading a lot of tabloids as none of the above are actually true.
Regarding
> Institution that dictates laws to States
again no, because the laws are created and voted on by elected representatives of said states, so the EU is not some third party that exists on the side; the EU is the countries within it. Member states create their own laws.
Please point out what is false. Does the EU, as an institution, produce technology? No. Europeans do, did it before the EU, and don't need the EU to do it. The WWW was created in Switzerland by an English scientist, not in an ugly "bureaucrat-grey" building in Brussels.
What you are saying here is false: EU regulations are directly applicable, and don't need to be transposed into local laws.
It's the case for directives, which are required to be transposed within 2 years. If a state doesn't comply, it faces an infringement procedure (Article 258 TFEU), sanctions, and fines (Article 260 TFEU).
The EU itself is not a real democracy, given that at every step obscurity and backroom dealing are preferred to transparency. Chat Control is an excellent example: it was recommended by officials whose names were redacted. Even when the European Parliament said no in the past, they try to push it again; those fools can't even vote a law preventing the EC from doing it!
More formally, the only directly elected institution (the Parliament) doesn't have the power of initiative, doesn't hold the pen during trilogue negotiations, is highly corruptible[0] given the proportional election, and can merely "accept" the head of the European Commission. The EC is the only permanent body and can arbitrate rotating presidencies, pressure parties, and screen MEPs to get what it wants.
The lack of judicial consequences for von der Leyen after the SMS affair is quite appalling for a commission that yells about "compliance" every day of the year.
The alternative would be to foster competition to allow private local actors to emerge. Why do we need the State for this? The EU prevents sellers from surcharging depending on the type of credit card used, which led to the Visa/MC duopoly and prevented local alternatives from competing on costs.
How many non-government physical currencies are there in your country? I'm guessing that if there were a legitimate government electronic currency, most alternatives would fade away, which would be the best rebuttal to your argument.
The public can hold government rules for surveillance in check, whereas they don't have that option for private payment systems.
Payments aren't, but issuing currency is. The state doesn't monopolize the payments. The state creates a regulation and a common standard across the EU. It's the banks who would do the payments, not the state, and for that the banks would receive a standard fee, so you as a consumer always know how much you pay for the service.
There already is. Almost every country on earth has a single state-controlled currency.
Fiat currency is already a natural monopoly on payments.
Imagine if every time you wanted to pay for a train ride you had to put your money into an envelope, mail it to the United States, and wait for it to come back. That's VISA.
I also do technical diligence; oftentimes the blocker to sharding is the target company's tenancy model and schema. The PK data type is certainly a blocker.
Definitely an issue, rarely the main one. You can work around integer PKs with composite keys or offset-based sharding schemes. What you can't easily fix is a schema with cross-tenant foreign keys, shared lookup tables, or a tenancy model that wasn't designed for data isolation from day one. Those are architectural decisions that require months of migration work.
UUIDs buy you flexibility, sure. But if your data model assumes everything lives in one database, the PK type is a sub-problem of your problems.
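The "offset-based sharding scheme" workaround mentioned above can be as small as carving up the integer PK space per shard. A purely illustrative sketch (shard count, offset, and function names are all made up; real migrations involve far more than key routing):

```python
SHARD_COUNT = 8

def shard_for_tenant(tenant_id: int) -> int:
    """Route a tenant to a shard. All of a tenant's rows land together,
    which is exactly what the tenancy model must already permit."""
    return tenant_id % SHARD_COUNT

def offset_pk(shard: int, local_id: int, offset: int = 10**12) -> int:
    """Give each shard a non-overlapping integer PK range, so rows can
    later be merged or moved between shards without key collisions."""
    return shard * offset + local_id

# tenant 42 lands on shard 2; its row 1001 gets a globally unique PK
pk = offset_pk(shard_for_tenant(42), local_id=1001)
print(pk)  # -> 2000000001001
```

The point of the thread stands: this kind of arithmetic fixes the PK type, but it does nothing about cross-tenant foreign keys or shared lookup tables.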
Hmm, yeah, that's a great callout. Something we definitely have in our sights longer term (the focus for now is to make sure that the desktop chat experience is truly amazing).
But why one or the other? Don't get me wrong, I appreciate a curated list of suggestions, but it would really be useful to have some tips or comments on the experience of each one, their shortcomings or advantages. Otherwise, it's not much better than just checking out a list of names from Google :)
I'll use hledger if I'm handling someone else's money, like as a trustee. Double-entry accounting is nice for being precise about things. But for my own accounts it's too much overhead to deal with reconciliation. I don't have time for that.
A lot of it is going to be needs & vibes based. Some of them have more in-depth and niche features in certain areas, like transaction splitting or categorization and others are just simple and clean UI to go for ease of use.
I use Monarch; I don't think it is very good as an 'investment tracker' (what Wealthfolio claims to be). It's fantastic as a more general personal finance/budget tracker.
For example, I have to reclassify loads of transactions for it to track close to correctly. Take treasuries: purchase at a discount, then redeem for the full amount at maturity. You can enter them as buy/sell, but then it won't properly report to cash flow or give you a good classification of what type of income that was.
Similar with stocks and short-term/long-term gains. It means that even though all the info is there, it's not as useful as I'd like for showing total income broken out by type to help with tax planning etc.
I still use it so the annoyances are not too extreme, but if there were a tool that did a better job of the investment side I'd switch.
Looking at the wealthfolio features I'm not sure it handles any of that any better though, but it does seem to break out dividend/interest income.
I use monarch and I've been happy enough with it. Would probably consider self-hosting with actual in the future, but I wanted an easy on-ramp for myself to actually get in the habit of budgeting.
We're entering the same market but with a tilt towards investment & actionable guidance. Same read-only capabilities on the account sync side (although our budgeting + spending side is still heavily in development), except we're an RIA that can provide professional advice (for free).
I still use GNUcash [1]. Only drawback is comparatively poor handling of equities, with no good way to view historic portfolio value / net worth. Great for general purpose accounting though.
As I recall, it doesn't incorporate the historic price of stocks. So if I bought 1 share of Nvidia for $10 10 years ago, it'd say I had a net worth of $180 then, not $10 (as it uses today's stock price).
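A toy illustration of the difference (the numbers are hypothetical and just mirror the Nvidia example above):

```python
# Price per share by year (hypothetical figures).
price_history = {2015: 10.0, 2025: 180.0}
shares = 1

# Correct historic net worth: value the holding at the price it had back then.
worth_then = shares * price_history[2015]

# What a tool that only keeps the latest quote reports for that same date.
worth_then_naive = shares * price_history[2025]

print(worth_then, worth_then_naive)  # 10.0 vs 180.0
```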
It's not perfect, for example its monthly/yearly subscription detection didn't work great for me, but compared to all those apps that involve trusting a third party with your banking data it's worth a look.
beancount plus its web UI, fava, is what I end up going back to whenever I look for these sorts of tools. The downside is I'm way behind on my ledger and don't _really_ want to spend the effort inputting everything to catch up.
Which you appear to be the developer of from other comments in this thread. Not saying it's bad, but it's self-promotion rather than organic preference.
> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.
You missed https://tiller.com which uses the same financial connectors as others but dumps the data into a Google/Office365 spreadsheet that you control.
I'm a huge fan of You Need A Budget, it was instrumental in giving me control over my finances. It feels like a superpower to see all my money in one place and not care which bank account the dollars actually reside. Also makes it easier to take advantage of various offers (Credit card or things like HYSA) since I know all the records will live in YNAB and I have full control there, even if the individual banks I use have terrible UIs.
Someone else mentioned this up the thread. I am a huge fan of YNAB too, but I just gave Actual Budget a try and I'm hooked. Some things are better and some things worse than YNAB, but it's open source and self-hosted. I'd recommend either.
So if CUDA could be ported to Mojo with AI, then it would basically be available for any GPU/accelerator vendor. Seems like the right kind of approach towards making CUDA a non-issue.
Chris Lattner, of Apple Swift and Tesla fame, is running a company entirely predicated on this, but at the deterministic language-design level rather than the inference level.
If a beam-search, iterative plan-and-execute approach is more effective than having better tooling in a deterministic programming language, then this will clearly take the lead.
Thanks for the link! I am not familiar with the company, but it reminds me of the whole formal-methods debate in distributed systems. Sure, writing TLA+ specs is the 'correct' deterministic way to build a Raft implementation, but in reality everyone just writes messy Go/Java and patches bugs as they pop up because it's faster.
That's correct. However, as other commenters have noted, doing this by hand is extremely challenging for human engineers working on tensor kernels.
The expense calculation might be
expense of improvement = (time taken per optimization step * cost of unit time) / (speedup - 1)
The expensive heuristic function saves wall time while also being cheaper per unit of time. And as the paper shows, the speedup gained per unit of time cost is large.
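Plugging hypothetical numbers into that formula makes the trade-off concrete. All figures below are assumed for illustration, except the 17x, which is the micro-benchmark number mentioned elsewhere in the thread:

```python
time_per_step = 2.0   # hours per optimization step (assumed)
cost_per_hour = 50.0  # dollars per hour of engineer/GPU time (assumed)
speedup = 17.0        # overall speedup achieved, e.g. the 17x micro-benchmark

# expense of improvement = (time per step * cost of unit time) / (speedup - 1)
expense = (time_per_step * cost_per_hour) / (speedup - 1)
print(expense)  # 6.25: each unit of extra throughput cost $6.25 here
```

The larger the speedup, the more step cost the improvement can absorb before it stops paying for itself.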
Usually the rate of overall improvement for this type of optimization is less than the Moore's-law rate of improvement, and thus not worth the company's investment. 17x micro-benchmarks don't count. Real improvements come from architectural changes, for example MoE or speculative multi-token prediction.