Hacker News | ehayes's comments

Uh, organometallic molecule? I was raised on the X-Files, I know the Black Oil when I see it.


I have a small astigmatism (and wear glasses) and I was able to do it, but I feel like if I did any more today I'd have a headache.



As an ex-Red Hatter (pre-IBM), this feels to me like top-down IBM influence, and against the always-open spirit of the old Red Hat.


I'm also an ex-Red Hatter, but I was there post-acquisition. It's definitely IBM influence, but it's long-time Red Hat people who are actually making these decisions. I don't think these are people who actually believed in Red Hat in the first place. Or if they did, they're willing to IBM-ify in order to keep their jobs (or ascend the ladder).


I have to agree. All IBM is asking for is to hit certain numbers. Now, those numbers might be short-sighted and unrealistic, but in my experience the operational decisions were still coming from inside Red Hat, at least through the middle of last year when I left. It doesn't matter whether it's IBM or Wall Street demanding double-digit growth every quarter; the end result is the same. Despite a really strong push by a lot of people inside the company to put culture first, in the end a demand for unsustainable growth is going to wreck any culture, no matter how strong.


It's exactly this. When I worked at IBM some years ago via another acquisition, this is the same pattern I saw. Senior leadership -- people who had been at the prior company for nearly two decades -- were the ones gladly steering it aground in order to do what IBM wanted.

You can see this with Red Hat: the Chairman and the CEO are both long-time Red Hatters. Were they just opportunists?

All the talk about running RH as an independent subsidiary was either smoke and mirrors or the IBM management style has infected the leadership at Red Hat.


> or the IBM management style has infected the leadership at Red Hat.

Tbh, if it hadn't, they would have been ousted long ago.


Yeah, but they probably all share DNS or something else related to networking.


Hmm, will it turn out to be DNS?


I'm guessing so, given the breadth of the outage.

(It's always DNS.)


Among the various tools for determining blame:

https://isitdns.com

https://shouldiblamecaching.com

https://bofh.d00t.org

nc bofh.jeffballard.us 666 # I am surprised this is still working given its age; the CGI version at https://pages.cs.wisc.edu/~ballard/bofh/ doesn't work anymore (but the page is still up)


DNS...Developers Need Solutions


Another Rails on Lambda framework is Lamby: https://lamby.custominktech.com/


What is a good algorithm-to-purpose map for ML beginners? Looking for something like "Algo X is good for making predictions when your data looks like Y," etc.



This is what I would have replied with too.


XGBoost. Always XGBoost. It will scale all the way from college Kaggle problems (where it is almost always the top performer) to cloud scale.

XGBoost is also one of the few frameworks supported by SageMaker, etc.
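
For a concrete sense of what that looks like, here is a minimal sketch using XGBoost's scikit-learn wrapper; the synthetic data and hyperparameters are placeholders rather than tuned recommendations:

    # Minimal XGBoost sketch on a synthetic tabular problem.
    # Assumes `pip install xgboost scikit-learn`; params are placeholders.
    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))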


Vowpal Wabbit is not the best anymore, but it is incredibly simple. You train it by piping text files in, then pipe your input into it for predictions.
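
To illustrate that pipe-the-files workflow, here is a rough sketch driving the vw command line from Python; it assumes the vw binary is on your PATH and that train.txt / test.txt already exist in VW's plain-text "label | feature:value" format:

    # Sketch of the "pipe text files in" Vowpal Wabbit workflow via the CLI.
    # Assumes vw is installed and train.txt / test.txt are in VW text format.
    import subprocess

    # Train: read examples from train.txt, save the model with -f.
    subprocess.run(["vw", "train.txt", "-f", "model.vw"], check=True)

    # Predict: load the model (-i), test-only mode (-t), write predictions (-p).
    subprocess.run(["vw", "test.txt", "-i", "model.vw", "-t", "-p", "preds.txt"],
                   check=True)

    print(open("preds.txt").read())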


Tsk to whoever downvoted this. Simple linear models are indeed the right starting point for most new projects while you come to grips with your data.

In some cases you can stop there or apply a quick nonlinearization like Fastfood to get good, snappy, and generally debuggable results for very little RAM.

In other cases you move on to decision tree ensembles or neural networks, depending on whether you already have features or need those to be learned, too. Either way this ratchets up the complexity and resource requirements.

Decision trees in particular tend to have bloated implementations. I still use XGBoost or Scikit for training, but wrote my own library to translate the models into a more efficient format (~95% smaller than Scikit) and have thread-safe inference.


Thanks for the reply! What is Fastfood though? I can't find anything on Google.


Of course. The paper is at https://arxiv.org/abs/1408.3060.

> Our method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and/or require real-time prediction.

Sadly Fastfood didn't quite make it into Scikit[1], but did land in scikit-learn-extra[2].

1. https://github.com/scikit-learn/scikit-learn/pull/3665. A shame, Scikit's equivalents scale very poorly.

2. https://scikit-learn-extra.readthedocs.io/en/stable/generate...
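
If it helps to see what that linear-model-plus-random-features recipe looks like in code, here is a minimal sketch. It uses scikit-learn's RBFSampler (the Random Kitchen Sinks approach the abstract compares against) purely as a stand-in with the same transformer interface; the Fastfood estimator from scikit-learn-extra should slot into the pipeline the same way:

    # Linear baseline vs. the same linear model on random kernel features.
    # RBFSampler is a stand-in here; swap in Fastfood from scikit-learn-extra.
    from sklearn.datasets import make_classification
    from sklearn.kernel_approximation import RBFSampler
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=5_000, n_features=30, random_state=0)

    linear = SGDClassifier(random_state=0).fit(X, y)

    kernelized = make_pipeline(
        RBFSampler(gamma=0.1, n_components=500, random_state=0),
        SGDClassifier(random_state=0),
    ).fit(X, y)

    print("linear:", linear.score(X, y), "kernelized:", kernelized.score(X, y))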


https://www.linode.com/products/bare-metal/ - still prefaced with "coming soon", but it looks like you can "create your build".


Old enough to remember upgrading the RAM in our home PC so we could play Doom - quadrupled it to 4 megabytes.


I remember a friend having a 64MB computer and struggling to play Diablo II. It would always lag. I upgraded him to 512MB using some old RAM sticks. The game never lagged again.


So many upgrades to meet “minimum” specs back then - I recall scrounging around to find a video card with just enough RAM to run SimCity 2000.

Now almost all upgrades are to make it better, not make it work at all.

