> So unless those pools are full of primitive data types, you’re going to cause more frequent full GC pauses, and longer due to all the back pointers.

If you have enough objects pooled to get a low allocation rate, then you never trigger a full GC. That's what "low latency Java" aims for.

I think the main downside of object pools in Java is that it's very easy to accidentally forget to return an object to the pool, or to return it twice. It turns out it's hard to keep track of which part of the code "owns" the object and should return it if the code is not a simple try{}finally{}. This was a significant source of bugs at my previous job (algo trading).
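
Roughly what I mean - a minimal sketch with a made-up pool API (acquire/release), not any particular library. The try{}finally{} case is the easy one; the bugs show up once the object leaves the method:

    final class Order {
        long price;
        long quantity;
        void reset() { price = 0; quantity = 0; }
    }

    final class ObjectPool<T> {
        private final java.util.ArrayDeque<T> free = new java.util.ArrayDeque<>();
        private final java.util.function.Supplier<T> factory;

        ObjectPool(java.util.function.Supplier<T> factory, int size) {
            this.factory = factory;
            for (int i = 0; i < size; i++) free.push(factory.get());
        }

        T acquire() { return free.isEmpty() ? factory.get() : free.pop(); }

        // Nothing stops a caller from releasing twice, or never releasing at all.
        void release(T obj) { free.push(obj); }
    }

    class MessageHandler {
        static final ObjectPool<Order> POOL = new ObjectPool<>(Order::new, 1024);

        void handleMessage() {
            Order order = POOL.acquire();
            try {
                // ... fill in and process the order ...
            } finally {
                order.reset();
                POOL.release(order); // easy while ownership stays in one method
            }
            // The hard cases: the order is handed to another thread, stored in a
            // queue, or captured by a callback - then it's no longer obvious who
            // is responsible for the release() call, or whether it already happened.
        }
    }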


You only avoid full GCs as long as the entire app makes no allocations, not just the hot path. If you allocate any garbage at all, you will eventually trigger one.


> each world was spherical (...) suddenly orienting the map was a challenge

The other downside was that it was possible to assault arbitrary points - e.g. drop units from orbit in the middle of an enemy base. It required much more awareness of the whole map. In a 2D game one could mostly focus on front lines and choke points.


CME documentation states:

"LMM – CME Designated Lead Market Makers are each allocated a configurable percentage of an aggressor order quantity before the remaining quantity passes to the next step."

Certain matching algorithms allocate to LMMs before considering FIFO - specifically algorithm T (LMM w/o Top). So I guess it depends on how strict your definition of a "FIFO order queue" is.
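
As a toy illustration of the idea (made-up numbers, not CME's actual matching logic) - the configured LMM percentage is carved out of the aggressor quantity first, and only the remainder is matched FIFO by time priority:

    class LmmAllocationExample {
        public static void main(String[] args) {
            long aggressorQty = 100;      // incoming aggressor order, in lots
            double lmmShare = 0.40;       // hypothetical configured LMM percentage

            long lmmAllocation = (long) (aggressorQty * lmmShare); // 40 lots go to the LMM first
            long remainingForFifo = aggressorQty - lmmAllocation;  // 60 lots matched FIFO

            System.out.println("LMM gets " + lmmAllocation + ", FIFO queue gets " + remainingForFifo);
        }
    }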

https://cmegroupclientsite.atlassian.net/wiki/spaces/EPICSAN...

https://cmegroupclientsite.atlassian.net/wiki/spaces/EPICSAN...

(I don't know the internals of the mentioned exchanges)


I have tried various approaches and here's what worked best, assuming that there is some natural way to partition most of the data (e.g. per account):

1. Init the DB with some "default" data - configuration, lookup tables, etc

2. Each test in the test suite owns its data. It creates a new account and inserts new records only for that account - it can, for example, create users on this account, new entities, etc. It can run multiple transactions and can do rollbacks if needed. It is important to only touch the account(s) created by the test and to avoid touching the initial configuration. There's no need to clean up the data after the test finishes. These tests can run concurrently (there's a rough sketch of this after the list).

3. Create a separate integration test suite which runs sequentially. Running sequentially means that these tests can do anything with the database - e.g. test cross-account functionality, global config changes or data migrations. In practice there aren't that many of those; most tests can be scoped to an account. These tests have to clean up after themselves so the next one starts in a good state.
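
For reference, a rough sketch of what a test in step 2 looks like with JUnit 5 - the TestData/TestHarness/OrderService helpers are made-up names for illustration, not a real library:

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.parallel.Execution;
    import org.junit.jupiter.api.parallel.ExecutionMode;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    @Execution(ExecutionMode.CONCURRENT)   // safe because each test owns its own account
    class OrderServiceIT {

        private final OrderService orderService = TestHarness.orderService(); // hypothetical wiring

        @Test
        void cancellingAnOrderReleasesReservedFunds() {
            long accountId = TestData.createAccount();    // fresh account, owned by this test only
            long userId = TestData.createUser(accountId);
            long orderId = orderService.placeOrder(accountId, userId, 100);

            orderService.cancelOrder(orderId);

            // Only rows belonging to accountId are touched or asserted on;
            // the shared "default" configuration is treated as read-only.
            assertEquals(0, TestData.reservedFunds(accountId));
            // No cleanup needed - leftover rows for this account don't affect other tests.
        }
    }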

Other approaches had tons of issues. For example, if each test is wrapped in a transaction which is later rolled back, then testing is very limited - tests cannot use transactions on their own. Savepoints have a similar issue.


Some of these are just "nice" but I'd probably buy this one: https://app.suno.ai/song/abddd209-4ad7-469d-82b9-f0117db0e51...


That... was amazing. Were the lyrics AI-written too, or is it a pre-existing text?


I don't know, it wasn't generated by me, I just found it in the "explore" tab. I tried to generate something similar and this came out: https://app.suno.ai/song/24dddb7b-5a10-45a2-8f52-94d627030c3...

I generated the lyrics with ChatGPT-4 + some manual tweaking.


I didn't expect the twist there, nice work on the lyrics!

I feel the suno-generated song itself lacks congruency. Like, you could listen to it 10 times, have the lyrics in front of you, and not be able to sing along.


> GIT is already best model for keeping history of textual changes

Git doesn't even keep a history of changes, just snapshots before and after them. A very common problem is viewing the history of a single file as it gets modified and renamed - that information just isn't stored. It's common for tools to show file history incorrectly - i.e. a previous version is removed and the new one (with small changes) magically appears.


This. Git is not an edit history. Another thing that surprised me (at least) is that you can't add empty directories (without placeholder files): https://stackoverflow.com/questions/115983/how-do-i-add-an-e...


This is not uncommon behavior for source control systems - e.g. both Perforce and Mercurial behave like Git here.

E.g. see https://portal.perforce.com/s/article/3447 for Perforce.


Yet, it's wrong.

Just the fact that people keep using placeholder files should be enough to convince anybody. But if you want to use git for software development, well, almost all development conventions call for directories that are intentionally kept empty or that exist before their contents do. I've never seen anybody settle on a convention that doesn't.


> almost all development conventions call for directories that are intentionally kept empty or that exist before their contents do

Never heard about this, nor can I imagine what purpose it could serve.


As an aside, aren't folders in *nix also files? So how did this happen with git, written by Linus?


Object stores like S3 work similarly: the entire path names an object, and unless there is data at a path, the path doesn't exist. And you don't need to create the prefix before storing something with that prefix. That the tooling abstracts this away to make it look more like a filesystem is a layer on top.


My pet peeve in enterprise development is that files grow into monstrous god objects: tens of thousands of lines long. There is no way to track that a single file was split into multiple files - they are all successors of it. When I go to one of the split-off files I want to see the history and blame of the method, not just "brand new, created yesterday". This has led to the "pattern" of never moving anything, because you will get blamed for it in the marginalia and it will be up to you to pull the old version and find the real author.


Have you tried `tig`? I can't remember trying exactly this, but I wouldn't be surprised if it has better support than plain `git` for this kind of thing.


I would argue the reason that git is the answer going forward is that what you describe is a UX issue, not a data structure issue.

There's nothing stopping a UI tool from figuring out that a file was moved or split into multiple files -- it just has to do so from looking at the state before and the state after. Git has gotten better at this over the years and keeps improving, so newer versions of git will be able to look at existing git repositories and synthesize better views of the history.
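
A toy sketch of that idea - inferring a rename purely from the two snapshots by content similarity. Git's actual rename detection is more sophisticated; this is just to show the information is derivable from the snapshots alone:

    import java.util.*;

    class RenameGuesser {
        // Jaccard similarity over the sets of lines - a crude stand-in for
        // git's delta-based similarity score.
        static double similarity(String a, String b) {
            Set<String> la = new HashSet<>(Arrays.asList(a.split("\n")));
            Set<String> lb = new HashSet<>(Arrays.asList(b.split("\n")));
            Set<String> union = new HashSet<>(la);
            union.addAll(lb);
            la.retainAll(lb); // la is now the intersection
            return union.isEmpty() ? 1.0 : (double) la.size() / union.size();
        }

        // before/after map path -> file content for two consecutive snapshots.
        static Map<String, String> guessRenames(Map<String, String> before,
                                                Map<String, String> after,
                                                double threshold) {
            Map<String, String> renames = new HashMap<>(); // old path -> new path
            for (var gone : before.entrySet()) {
                if (after.containsKey(gone.getKey())) continue;       // file still exists
                for (var added : after.entrySet()) {
                    if (before.containsKey(added.getKey())) continue; // not a new file
                    if (similarity(gone.getValue(), added.getValue()) >= threshold) {
                        renames.put(gone.getKey(), added.getKey());
                        break;
                    }
                }
            }
            return renames;
        }
    }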


> Git doesn't even keep history of changes, just snapshots before and after the changes.

This seems like splitting hairs given you can trivially derive one from the other.


It's not just the type of work but also experience. Here are two cases:

1. After working for many years in Java I needed to build a service. I spent a few days designing it and then a month on implementation. I used DBs and libraries I knew very well. I didn't need to access Google/StackOverflow, I didn't need to look up names of (std)lib methods and their parameters, and if something wasn't working it was fairly obvious what I needed to change.

2. Recently I wanted to create a simple page which fetched a bunch of stuff from some URLs and showed the results - simple stuff, but in React, since that was what the frontend team was using. I had never used React and had rarely touched the web in recent years. Most of my time was spent googling about React and how exactly CORS/SOP work in browsers, and with polishing it took a couple of days.

I'm pretty sure that in case 1) AI wouldn't help me much. Maybe just as fancier code completion.

In case 2) AI would probably be a significant time saver. I could just ask it to write a draft for me and then make a few tweaks, without having to figure out React.

But somehow nobody quantifies their experience with the languages/tools when they are using AI - I'm sure there's a staggering difference between 1 month and 10 yoe.


In my opinion the PR/changeset description is exactly what should be in the commit description. In cases where we had 1 commit per PR (i.e. squashing before merging), just copying the PR description into the merge commit worked really well - the goal of a PR description and of a commit message is essentially the same.

I wish GitHub allowed making the copying automatic and ensured that it happens (it doesn't, unfortunately).

If someone wants to learn the full history - the remarks during code review, perhaps all the WIP commits - they can read the PR/code review comments. I found it to be very rarely needed.


This is exactly how Microsoft Azure DevOps works when you enable the squash-on-merge behavior (which is how we used it while working at Microsoft). I thought this was completely logical and I'm surprised that GitHub can't be configured the same way.

All of our commit messages were nice, long and detailed, with a link back to the PR if you really wanted to go back and see the individual commits and/or discussion that occurred on that PR. I think I only looked at individual commits maybe once or twice since they were usually useless in isolation (woops, WIP, fix typo, etc.).


settings > allow squash merging > default commit message > pull request title and description


Oh, nice, thanks. I haven't been using GitHub recently; it looks like this option was added ~1.5 years ago.


> I don't see how replacing the "it works on my machine" with "it works in this VM" is much better. It is the same problem.

It's not that. It's replacing "it works on my machine and I'm not sure why it doesn't on yours" with "follow these exact steps and it will work on your machine too".


Except when it doesn't, because not all problems are caused by the machine's configuration, and software that exhibits the "it works on..." behavior tends to break due to the most diverse and uncontrollable causes.


If a large portion of the machine's configuration is managed rigorously, the software will only break for novel "uncontrollable causes" outside this boundary. That forces learning and permanent process improvement instead of wasting time fixing the same issues repeatedly - an ideally efficient situation, not a problem.


Software should be robust to most uncontrollable causes, not fail due to small changes.

It's perfectly reasonable to use containers to solve dependency management. But it's really bad form to have containers as the only viable way to distribute your software, and a very reliable indicator that your software won't work at all.


> I'm very curious to hear from someone who likes DI in a non-OOP language - why?

The alternative to DI is keeping dependencies in global state (global variables, singletons, whatever), and that is a maintainability nightmare for any codebase above a certain size. I don't think it matters whether the language is OOP or not; the issue is the same.
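
A minimal illustration of the difference (all names made up):

    // Global-state version: every caller is silently coupled to the singleton,
    // and a test can't swap the clock without mutating global state.
    enum SystemClock {
        INSTANCE;
        long now() { return System.currentTimeMillis(); }
    }

    class AuditLogGlobal {
        void record(String event) {
            long ts = SystemClock.INSTANCE.now(); // hidden global dependency
            // ... write (ts, event) somewhere ...
        }
    }

    // DI version: the dependency is explicit, so a test can pass a fake Clock
    // and two components can use different clocks without touching anything global.
    interface Clock { long now(); }

    class AuditLog {
        private final Clock clock;
        AuditLog(Clock clock) { this.clock = clock; }

        void record(String event) {
            long ts = clock.now();
            // ... write (ts, event) somewhere ...
        }
    }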


modules just do it better

