
Can you please tell us more about how you used Ray for setting up the RL infrastructure?


Oh, good question. I'm actually speaking at the Ray Summit next week in SF, so we will talk more about it there. We used Ray throughout the pipeline: for running evals, for the RL controller, for data collation, and for visualizations. One tool we found helpful was Ray Data, which let us easily scale over data and run logs.


Please share more about Ray Data use case.


We use Ray Data for our map-style processing jobs. For example, one tool we have runs over all the rollouts from the RL system and collects qualitative statistics to understand which types of agent trajectories are being rewarded, and what types of searches and terminal commands are being made.
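
As a rough sketch of what such a map-style job looks like (the log path and field names here are made up for illustration, not our actual schema):

    import ray

    # Hypothetical rollout logs stored as JSON lines.
    ds = ray.data.read_json("s3://my-bucket/rollouts/")

    def summarize(row: dict) -> dict:
        # Collect lightweight qualitative stats per rollout.
        return {
            "reward": row["reward"],
            "num_commands": len(row.get("commands", [])),
        }

    stats = ds.map(summarize)
    print(stats.mean("reward"))

Ray Data handles partitioning the files and scheduling the map tasks across the cluster, so the same script scales from a laptop to many nodes.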


Does this mean that all the documents (emails from tech executives) that are part of the evidence for a court case will be publicly available?


No, generally evidence is not filed as part of a docket. PACER only includes court filings such as the complaint, answer, motions, orders, etc.


And quotations from evidence produced during discovery are often redacted from those filings.


They are already publicly available. You can access up to, IIRC, $30 worth of PACER for free each month. I know my usage usually stays under the free threshold.

But as the poster below mentions, it depends on the form the documents exist in. There are two types of documents in a court case: the common law record and discovery.

Discovery = evidence kept by both sides, but not held by the court. This includes written documents and audio/video recordings, but also the transcripts of any depositions that have been taken of important people in the lawsuit. (A deposition is where you sit down with a person and question them under oath outside of the courtroom.)

Common law record = anything that is said in court, or filed in court.

Most discovery never makes it into a court room. It is just shuffled back and forth between the plaintiff and defendant.

Now, when it comes time to file a motion (let's say one of the parties tries to get the case dismissed) then it will usually be necessary to attach some documents as exhibits to the motion to clarify the point you are making to the judge. At that point, those documents will make it into the public record and will be in PACER.

Even if something isn't filed in court, if the court case is USA v. TechBro, for instance, then as the USA is a governmental entity it is subject to FOIA and you can often get a copy of all the discovery that way. I often use FOIA just to avoid PACER fees.


There is more to BH than how they acquire companies. They also have a track record of outperforming the S&P for decades. I doubt BH's alpha is because of their streamlined acquisition process.

Does Tiny have a similar record?


The whole idea that BH is a success due to good deals is wrong. BH has made good deals and bad deals, but that's not the main reason for their long-term track record.

BH has a good record because they have structured their business to have a permanent edge: their insurance & reinsurance business (half of the business) generates cash and float constantly. The other side of the company seeks ways to invest all that cash, often buying whole businesses.

In other words, BH cuts out several levels of middlemen. They are an insurance company, holding company, private equity & alternative investment management company, and investment fund rolled into one. Cutting out banks, private equity firms, and fund managers brings in huge savings.

The reason why BH has not been outperforming the S&P 500 in the last 10 years is that cash is cheap for everyone (this is the longest boom in history). Once the water level drops again and we see who swims naked, BH will outperform again.


> The reason why BH has not been outperforming the S&P 500 in the last 10 years is that cash is cheap for everyone (this is the longest boom in history). Once the water level drops again and we see who swims naked, BH will outperform again.

At the start of the tech boom, Buffett famously said he does not invest in tech companies because he does not invest in what he does not understand. Then he bought a ton of Apple a few years later, and that is basically the only thing keeping Berkshire stock in the game.

The last amazing deal I can recall that Berkshire made was Goldman Sachs during the 2008 financial crisis. Other than that, I think he might have been better off buying VOO. The company itself is 40% Apple right now, for which Berkshire paid full retail price when it was bought.

I think the parameters of the game that used to allow Buffett to achieve exceptional results have long changed, as evidenced by the numbers.


Berkshire stores value when times are good; they make exceptional deals when there is a crash and they can buy good stuff cheap. They always have cash, and they don't have to sell assets to buy stuff.

Unless you think that there will be no financial crisis or severe recession again, Berkshire will probably shine again.


> Then he bought a ton of Apple a few years later,

Did he buy it or was it one of the two fund managers who work for BH?


Sorry, I should have said Berkshire bought it. I do not know the specifics, but I assumed such a large investment would have had Buffett’s approval.


It was primarily Buffett (he said he bought it) but I suspect one of the other two bought some and later sold it as well.


>BH has made good deals and bad deals

Yup... famously, Buffett's worst acquisition was of Berkshire Hathaway itself!

It probably cost him ~$200B.


The author does not discuss how BH chooses which companies to target for acquisition; it's just not within the scope of this post. BH's success comes from choosing their targets and their prices wisely. Their higher closing rate is the multiplier that scales up the returns from their good targeting.


Well, the implication is that it gets good prices by being a very easy acquirer, which, if true, would certainly generate alpha.


They don’t need a record because they’re not marketing to investors. If you want to sell your business, this is an advert. Sellers don’t need to worry about the buyer’s alpha.


I am fluent in 3 languages. I grew up in India, where just about everyone ends up learning at least 3 languages. A month is too short to really pick up a language.

Sailing or outdoorsy stuff like hiking is not my cup of tea. I work out (cardio and weights) to keep myself fit. I am not planning on making any changes to my daily routine during the month.


Are there specific graph databases you would recommend? I played with Neo4j about 5 years ago. Has there been anything revolutionary in this field since then?


Oh, if you've seen it ... Cypher seems to be spreading: it's being standardised and adopted by DBs other than Neo4j, so I thought a slow month would be a good time to pick it up, play with a few of the DBs, build a toy project ... but not if you know it already.
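
If you haven't touched it since, here's a minimal taste of Cypher driven from the official neo4j Python driver (the connection details and the tiny Person/KNOWS schema are just made up for illustration):

    from neo4j import GraphDatabase

    # Hypothetical local instance and credentials.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    with driver.session() as session:
        # Create two nodes and a relationship, then query the pattern back.
        session.run(
            "MERGE (a:Person {name: $a}) "
            "MERGE (b:Person {name: $b}) "
            "MERGE (a)-[:KNOWS]->(b)",
            a="Alice", b="Bob",
        )
        result = session.run(
            "MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name"
        )
        for record in result:
            print(record["a.name"], "knows", record["b.name"])

    driver.close()

The nice part is that the Cypher itself should port with little change to other engines that have adopted it (e.g. Memgraph, RedisGraph).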


Was Cinder influenced by hhvm (Facebook's VM for PHP/Hack)? A project that maintains a list of different JIT implementations for programming languages and compares them would be a great way to see the different approaches to implementing JITs and which language features make it hard to implement performant JITs.

As an aside, it is great that the Cinder team is specifically calling out that Cinder is not intended to be used outside of FB. Many people have been burned by the lack of community around hhvm.


Definitely influenced. There are people on our team who also worked on hhvm.


> A project that maintains a list of different JIT implementations for programming languages and compares them would be a great way to see the different approaches to implementing JITs and which language features make it hard to implement performant JITs.

SOM, for example, has many implementations with different approaches to compilation: http://som-st.github.io.


I'm afraid there is very little documentation/text on modern production JITters. When I tried finding any text for my MSc, I had little success. Does anyone have a suggestion about e.g. .NET 3-tier jitting or similar?


> When I tried finding any text for my MSc I had little success.

Yes, you basically need to sit down with an expert to learn this stuff. It's famously under-documented, and it's extremely hard to learn how it's done in practice on your own.
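
To at least make the idea of tiering concrete, here's a toy model (purely illustrative, nothing like how any production VM is actually built): functions start out interpreted, get promoted to a cheap "baseline" tier after a few calls, and to an "optimized" tier once they're hot.

    # Toy model of tiered compilation: functions are promoted through
    # tiers based on call counts. A real VM emits machine code; here
    # the "tiers" are just stand-in callables.

    BASELINE_THRESHOLD = 10     # calls before baseline "compilation"
    OPTIMIZED_THRESHOLD = 1000  # calls before optimizing "compilation"

    class TieredFunction:
        def __init__(self, interpreted_impl):
            self.impl = interpreted_impl
            self.calls = 0
            self.tier = "interpreter"

        def __call__(self, *args):
            self.calls += 1
            if self.tier == "interpreter" and self.calls >= BASELINE_THRESHOLD:
                self.impl = self.compile_baseline(self.impl)
                self.tier = "baseline"
            elif self.tier == "baseline" and self.calls >= OPTIMIZED_THRESHOLD:
                self.impl = self.compile_optimized(self.impl)
                self.tier = "optimized"
            return self.impl(*args)

        def compile_baseline(self, f):
            # A real baseline tier quickly emits unoptimized machine code.
            return f

        def compile_optimized(self, f):
            # A real optimizing tier uses profiles gathered by the lower
            # tiers (types seen, branches taken) to specialize the code.
            return f

The hard, under-documented parts are everything this sketch waves away: on-stack replacement, deoptimization back to lower tiers, and what profiling data the lower tiers should record.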


Here's some good documentation about v8's JIT: https://github.com/thlorenz/v8-perf/blob/master/compiler.md

Note: Never worked on v8, just liked the information here.


I wrote down "Survey of tiered compilation in JIT implementations" when I researched this: https://github.com/sanxiyn/blog/blob/master/posts/2020-01-03...


Growing up in India, I had quite a few of these Soviet gems on my bookstand. The only thing I had that originated from the West was a pair of Levi's jeans. I was shocked to find out that a system that could make those jeans would win against one that made those books.

In the early 2000s I was working in SV and repeated this remark to a Russian colleague. He told me that of course the system that made the jeans was superior: he did not get his first pair of jeans until he got to the US, and the trousers he wore in Russia were absolutely horrible compared to the jeans. Jeans to him were a marvel of engineering. The fact that a system could produce affordable jeans that would last for years, which people could buy whenever they wanted by strolling into their neighborhood shop, was a much bigger achievement than state-sponsored STEM books.


I grew up with plenty of very good Soviet books and without Levi's jeans.

The jeans were a status symbol, almost impossible to find, and the price was one to three monthly salaries of an engineer, teacher, or doctor.

Now that I have more access to jeans but have trouble finding good books for my kids, I would choose the books over the jeans. But I wouldn't go back to the USSR.


It's also a bit of a false narrative that systems win or lose: the people of the USSR decided they wanted to change, and they changed. The system they have now is not exactly the American system, you'll agree, and they're not dead: so what "won"? KGB-controlled non-communist autocracy, or liberal capitalism? I'd say the first :)

Now, the system that other countries are inspired to copy, which is maybe your definition of winning, is probably the second one. But that didn't have to be settled at the fall of the wall, or of the USSR. The problem was always the USSR, but the solution will probably rarely be the US.


How does Seed compare to AWS CDK Pipelines?

https://aws.amazon.com/blogs/developer/cdk-pipelines-continu...

I know that if I go the route of CDK Pipelines, I will need to implement my CI/CD pipeline on my own using CDK. I want to know what the other advantages of Seed are.
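
For context, "on my own using CDK" looks roughly like this minimal sketch (CDK v2 Python API; the repo name and build commands are placeholders):

    from aws_cdk import Stack, pipelines
    from constructs import Construct

    class PipelineStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Self-mutating pipeline that rebuilds itself from the repo.
            pipelines.CodePipeline(
                self, "Pipeline",
                synth=pipelines.ShellStep(
                    "Synth",
                    input=pipelines.CodePipelineSource.git_hub("my-org/my-repo", "main"),
                    commands=["npm install -g aws-cdk",
                              "pip install -r requirements.txt",
                              "cdk synth"],
                ),
            )

Everything beyond that (stages, approvals, notifications) you wire up yourself, which I assume is the work Seed would take off your hands.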


I talked a little bit about it here: https://news.ycombinator.com/item?id=25838954

But the big one for CDK is that it's faster and basically free on Seed.

Feel free to get in touch if you want more details! jay@seed.run


I would tweak the definition of rich provided by Sivers:

Having more money than you spend in a year is not enough to be 'rich'. You want assets that will always generate more money than you need to spend, for every year that you live.

I think people read Sivers's blog because he got to this tweaked definition of rich. This article does not seem to acknowledge that reality.
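
A back-of-the-envelope version of that tweaked definition (the 4% safe-withdrawal rate is a common rule of thumb I'm assuming here, not anything from Sivers or the article):

    annual_spending = 60_000        # hypothetical yearly spending
    safe_withdrawal_rate = 0.04     # rule-of-thumb sustainable withdrawal

    required_assets = annual_spending / safe_withdrawal_rate
    print(required_assets)          # 1500000.0

So 'rich' in this sense isn't a big number in the bank; it's assets on the order of 25x your spending, so the returns cover every year you live.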


The cheapest Volta GPUs I have seen so far cost over $2K for 12GB. Can the GPU provided in this kit be used for training?


Yes, if your model is small enough or if you are fine-tuning a small number of layers. TensorFlow 1.15 and 2.0 are available on Xavier. I understand that PyTorch can be built as well.

Note that the number of CUDA cores and the amount of memory available are smaller compared to discrete Volta GPUs.
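
A minimal sketch of what fine-tuning a small number of layers looks like in Keras (the backbone, layer count, and class count are all just illustrative):

    import tensorflow as tf

    # Pretrained backbone; only its last few layers will be trained.
    base = tf.keras.applications.MobileNetV2(weights="imagenet",
                                             include_top=False)

    # Freeze everything except the last 4 layers so the Xavier's
    # smaller GPU only computes gradients for a thin slice of the model.
    for layer in base.layers[:-4]:
        layer.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes, made up
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy")
    # model.fit(...) on a small dataset is then feasible on-device.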


You're saying it can do training for small models because of the presence of the small (512-core) GPU? (Plus maybe some leftover control calculations by the CPU.)


It's a low-wattage device. Its performance can't hold a candle to a last-gen card that uses 10X the power.


Nope, the use of the words 'edge' and 'inference' in the tag-line pretty much means there is no learning, no training.


Does it have "fake" tensor cores? Aren't those for training?


You still need tensor cores for inference. But they don't do weight updates. Learning/training is all about updating the weights (through backpropagation or whatever).

So another way to put it: its tensor cores do feed-forward calculations, but no backpropagation, and no weight updates.


The hardware and platform are capable of training just fine. It's just rarely done because it is slower than training on pretty much any discrete GPU.

