
There's actually an entire branch of math (Galois theory) that spawned from the fact that it doesn't generalize!

In particular, there's no general formula in radicals for the roots of polynomials of degree 5 or higher.

https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem


Sure, but there are analogous cubic and quartic formulas that follow somewhat similar derivations. Here's a fairly complete derivation of the cubic formula and a slightly abbreviated derivation of the quartic formula: http://math.sfsu.edu/smith/Documents/Cubic&Quartic.pdf
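
For reference, here's the standard textbook destination of that derivation (just the usual form, not pulled from the linked PDF): after substituting x = t - b/(3a) to kill the quadratic term, the depressed cubic t^3 + pt + q = 0 has the Cardano root

    t = \sqrt[3]{-\tfrac{q}{2} + \sqrt{\tfrac{q^2}{4} + \tfrac{p^3}{27}}}
      + \sqrt[3]{-\tfrac{q}{2} - \sqrt{\tfrac{q^2}{4} + \tfrac{p^3}{27}}}

The quantity under the inner square root is proportional to the negative of the discriminant, and it going negative is exactly the complication the replies below discuss.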


The (classical) derivation of the cubic solution involves square roots of negative numbers. Hence it either requires unsatisfying hand-waving, or knowledge of complex numbers. These topics come up quite a bit later than the much simpler "solve a quadratic equation" question.


Right.

Any vaguely rigorous derivation of the cubic formula requires dipping into complex numbers, because the discriminant of the cubic can be negative. You can just choose to ignore that case and focus only on nonnegative discriminants, but that's no fun, is it? :-) Although, that is how it was "classically" done.

If you want to find the complex roots, but don't want to deal with them in the derivation, you can just take the real root and multiply it by (-1 ± i √3)/2. Knowing that this is all you have to do, of course, involves some much heavier algebra than would have been available classically, but at least it's easy for students to understand (if they have the proper group theory and/or Galois theory background), or at least to follow a plausibility argument.

After looking at Wikipedia, I found that Lagrange had an interesting method[0] I didn't know about, in which he considers the discrete Fourier transform of the roots rather than the roots themselves. That approach makes some sense, because the whole power of the Fourier transform is to turn gnarly multiplication problems into simple addition problems.

I also like this method, because it's honest about needing to dip into the complex numbers to find all roots. Also, it's kind of a cool historical note, because he was trying to solve the general problem of finding all roots of a polynomial, and hoped this method might generalize.
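
For anyone curious, a rough sketch of the resolvent idea (my paraphrase of the Wikipedia section, so the details are on me): label the roots x_0, x_1, x_2, let ω be a primitive cube root of unity, and take the "DFT" of the roots,

    t_k = x_0 + \omega^k x_1 + \omega^{2k} x_2, \qquad k = 0, 1, 2

Permuting the roots only shuffles t_1^3 and t_2^3 (cyclic permutations multiply t_1 and t_2 by powers of ω, which cubes away), so t_1^3 and t_2^3 are the two roots of a quadratic whose coefficients can be written in terms of the cubic's coefficients. Solve that quadratic, take cube roots, and invert the transform to recover the roots:

    x_j = \tfrac{1}{3}\left(t_0 + \omega^{-j} t_1 + \omega^{-2j} t_2\right)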

Edit: here's a lesson plan from Stanford that walks through all the necessary algebra in 5 class periods: https://web.stanford.edu/~aaronlan/assets/symmetries-and-pol...

---

[0]: https://en.wikipedia.org/wiki/Cubic_equation#Lagrange's_meth...


Knowledge of complex numbers is required to solve quadratics too (when the discriminant is negative).


Not to solve real quadratic equations with real solutions. In that case, a negative discriminant just means there is no real solution.

Whereas in the cubic case, you can come across square roots of negative numbers while finding real solutions of real equations.

Hence, you can treat the quadratic case within the world of just the reals. Whereas the cubic case doesn't work that easily when working with just the reals.
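
The classic example of this (Bombelli's, adding it here for concreteness): x^3 - 15x - 4 = 0 has the perfectly real root x = 4, yet Cardano's formula returns

    x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}}
      = \sqrt[3]{2 + 11i} + \sqrt[3]{2 - 11i}
      = (2 + i) + (2 - i) = 4

since (2 + i)^3 = 2 + 11i. You only get back to the real answer by passing through the complex numbers, which is the "casus irreducibilis".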



There are a lot of similar failure modes in humans.

Sometimes they're used for good, like making drivers slow down: https://www.insider.com/optical-illusions-3d-crosswalk-drivi...

But other times they cause crashes: https://imgur.com/a/kYr94


>There are a lot of similar failure modes in humans.

There are a lot of similar failure modes in individual humans. The difference is that individual driver errors are not correlated: a bad reflection tricks a few humans at once, rather than tens of thousands of cars running the same model. There is significantly more brittleness in a fleet of cars than in a population of humans, because the human population has diversity in judgement and experience.

This is not simple to fix, because this uniformity is actually a feature of automated systems: it makes them explainable, predictable, consistent, and cheap; training a human for 20 years is more expensive than training all the cars once. However, it also makes them collectively vulnerable, which is why heavy machines tend to be locked away on factory floors.


True. People need to stop comparing self driving cars favorably to the worst performing humans. Gives companies license to release these badly performing cars into the streets.


They are assistive systems. The only thing that's badly performing are the users who misuse them and get themselves into trouble.


hard braking or swerving for clear false positives is not very assistive...

it seems to be the opposite of assistive. How can a person know all the weird things that will make a computer vision system fail badly? It shouldn't be used at all if that is the case.



I wonder if these self-driving cars would recognise those 3D crossings as equivalent to the normal kind. I could see them looking quite different to a neural network, or to anything that's looking for specific features and is a bit overfitted.


Those examples in the first link are so bad, I hope they don't catch on.


Some places in the Netherlands tested those. Imho they don't work as advertised. In a picture they might look 3D, but when you're driving towards them your brain notices the perspective doesn't change the way it should, so you aren't tricked into thinking it is 3D.

The effect is more like: what kind of strange art is this?!


I don't know the details, but I heard the original codebase was JavaScript and they used an automated script to convert it to CoffeeScript as a hack week project! That's some serious long-term damage they did in that one week.

Edit: Found the 2012 post, pretty useful for doing a postmortem https://dropbox.tech/application/dropbox-dives-into-coffeesc...


I don't view it so black and white. While certainly parts of how we ended up using coffeescript weren't great and ended up causing lots of problems, in 2012 it looked like a net win - we got a bunch of new language features several years before they were actually standardized in any form.

It wasn't until 2015 that es6 came around, and we saw how things could be better; and typescript didn't have much momentum until 2015 either. Without a crystal ball, it'd have been very hard to predict this shift.


I wonder if Dropbox devs would say CoffeeScript was a net negative. The authors mentioned how it made supporting IE11 easier, for example.


The devs that did it probably quit immediately after and started a consulting company where they advised all their clients on how to convert to coffeescript too


Another similar project by Mark Rober for darts: https://www.youtube.com/watch?v=MHTizZ_XcUM


He mentions it in the video at around the 1:53 mark.


This is probably premature optimization, but what's the performance cost of updating a deeply nested field inside a jsonb? What about indexing it?

(compared to storing in some normalized form or in other popular document stores)
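
Not an answer, but here's roughly how I'd measure it: a minimal sketch in Python/psycopg2, with made-up table and column names (an events table with a payload jsonb column vs. a hypothetical normalized user_settings table):

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=test")  # assumes a local test database
    cur = conn.cursor()

    def timed(label, sql, params, n=1000):
        start = time.perf_counter()
        for _ in range(n):
            cur.execute(sql, params)
        conn.commit()
        print(f"{label}: {(time.perf_counter() - start) / n * 1000:.3f} ms/update")

    # jsonb: even a deeply nested change rewrites the whole jsonb value (and the
    # row), so cost tracks document size more than nesting depth.
    timed("jsonb_set",
          "UPDATE events SET payload = jsonb_set(payload, %s::text[], %s::jsonb) "
          "WHERE id = %s",
          (["user", "settings", "theme"], '"dark"', 1))

    # normalized: only one small row is touched.
    timed("normalized",
          "UPDATE user_settings SET theme = %s WHERE user_id = %s",
          ("dark", 1))

As for indexing, my understanding is that a GIN index on the jsonb column (optionally with jsonb_path_ops) helps containment queries but doesn't make updates cheaper; it's one more structure to maintain on every write.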


Uber/airbnb/lyft were considered in the same tier as facebook/google before corona. With google also slowing hiring, are the engineers laid off all going to be absorbed by facebook?


This will play out more like the dotcom bust than the 2008 crisis.

For most younger people here this may come as a shock but tech jobs can disappear, for years.

The job market in 2019 felt a lot like the job market in 1999: people rushing to get upskilled so that they could get into software and make ridiculous money. After the dotcom bust a huge number of people left tech, or at least left the world of software development. Salaries also went way down for nearly a decade.

A common comment here on HN when layoffs happen is "well they're just getting rid of superfluous employees, so this is no big deal". This assumption is not only ridiculously callous about real people losing their jobs, it also underestimates how much work is superfluous in SV.

It is very possible that tech will take a serious hit this time, and that the total number of software engineers, data scientists, devops people etc. will go down... for years. And likewise salaries will also drop (TC automatically does this thanks to the magic of RSUs).

Certainly people will need software and software engineers, just likely not nearly as much as they do now. Once FAANG realizes there's no longer a need to keep talent off the market, expect salaries to take a dive.


People have been calling software engineering wages a bubble for 20 years, and trying to call “the end of tech”. I just don’t see it, highly proficient software engineers are more valuable than ever. The wages are higher than ever. Going forwards, tech is really the only engine of growth in the American economy. Trying to say tech will be under for years is like saying the USA will be in a massive recession for years. I just don’t believe it.


What’s happening now is very analogous to the 2000 dot.com crash. The economy overall will recover, albeit slowly. However companies are questioning why they need 100 engineers instead of 50. That doesn’t just reverse itself overnight. As the OP said those that weren’t around the last time might find it hard to believe that tech jobs really just disappear for years.


No it's not. The fed printed so much money and dropped rates so low, there's a massive excess of capital looking for business opportunities. Monetary policy is what is driving the economy, and it's making this bubble bigger than ever. You can't ignore economics and cheap money. We will have ten more WeWork-like companies.


Rates are low now but it will take time for this to get reflected in VC/PE/corporate dollars to invest. The addressable market of consumers/enterprises that are willing to spend money on non-essential products/services is down and will drop further since budgets are being tightened and fat is being cut. The 2008 unicorns that were born out of the last recession had to provide oversized returns to investors based on them addressing needs well beyond what cash-strapped consumers/enterprises were willing to spend on. We're in for a ride.


Not really. You are underestimating the magnitude of money injected into assets.


[flagged]


Educate me then, why am I wrong?


You aren't wrong. Most of the central banks' loans went into inflating the price of stocks, bonds, and real estate. Inflation in those assets has been running over 10% a year, while goods, services, and wages have been running at 2% per year. That's all due to central bank operations.


When you've got 100s of thousands of unemployed talented engineers on the market, you're not going to be able to sustain $200k salaries for people who know how to write a YAML file for long.


Not when the FED is printing so much money. That money has to go somewhere, and a lot of it ends up funding new startups and new corporate debt.


YAML files are sometimes difficult to write.


Which is gonna suck for a lot of people in major metros, as those salaries were necessary to keep up with asset inflation post-2008.

Wariness about the future, spiced with a tinge of schadenfreude.


I should point out, as I have no attachment to my karma, that the schadenfreude comes from the fact that those inflated asset prices were used to justify and fund gentrification and displacement. If a crash ruins a few of the entities profiting off that, well, chef's kiss.


Well, guess what: those assets won't be able to sustain themselves without those salaries.


Counterpoints:

- before Coronavirus, there was a shortage of people to work in software (if you believe BLS)

- software employment represents a very small portion of the labor force

- there are many problems that have just happened that will require software to solve

- while there are undoubtedly industries that will be hard hit, there are many software-intensive sectors that will see growth as a result of the crisis (ecommerce for one)

- the world in 1999 was much different. The internet was a curiosity; now it's essential.

- and finally, companies that are looking to implement efficiencies (read: lay people off) turn to .. software

edit, one more:

- many software positions over the past decade+ have been filled by immigrants. Going forward the US is going to be much less friendly to immigration, so competition for jobs will go down for US workers


Would this be a bad time to be in a job hunt to change companies? Anticipating that there may be a cohort of newly laid off SWEs from prestigious brand-name companies (Uber, Lyft, potentially others) flooding the market?

I'm guessing I, as a lowly, humble SWE from an investment bank, would be easily overlooked in favor of those aforementioned veterans/alumni of highly regarded Silicon Valley companies. Unless I make a lateral jump to another bank or hedge fund, which I have no desire to do.


I’m in a similar situation and am still getting messages from FAANG recruiters. I think you would be fine.


You're making a sweeping assertion with zero evidence to back it up.

One obvious counterpoint to this argument is that this recession was not caused by markets realizing that many publicly listed tech companies actually had no market and no path to profitability.

Also, non-tech companies took on a large number of engineers in the late 90s to address Y2K - many of whom got laid off afterwards. Not an issue here.


I totally agree with this perspective: salaries will go down, the number of jobs will decrease.


> Uber/airbnb/lyft were considered in the same tier as facebook/google before corona

So was wework, by many. Facebook/google could not be more different from uber/airbnb/lyft.

FB/G are in a high-margin business, operating largely in virtual space, where scaling to new markets has low marginal costs.

Uber/airbnb/lyft are in a historically low-margin, capital-intensive space, where scaling to new markets has very high marginal costs. Just as wework is office rental with an app, uber is a taxi with an app. The app part is cool, and gives them a meaningful advantage, but it's still a taxi business, and they were not able to change that.


Odd that you say that. I got an offer from WeWork, which I declined because of weird cult-y vibes I was getting during the interview. A few months later they blew up.

I got rejected by Uber, was too scared of rejection to try Airbnb or Lyft. But I considered all three a tier (or more) above WW.

I'd have hands down, without hesitation, accepted a job from Uber even if I would have been laid off at some point down the road. Having Uber (or Lyft, or Airbnb) on the resume would open doors I imagine having WeWork (even disregarding the whole scandal) would not.


GP referred to the prestige as a workplace for SWEs and not to the viability of a business.


Facebook is surely in a similar ad dollars crunch as Google. I can't imagine they're full speed.


Based on what we've heard from Google yesterday, and FB's Q1 results just now, it looks like the much heralded ad crunch is turning into not much more than a speed bump.


But we need to wait for Q2 results to know the full impact, right?


Yeah, but in Google's earnings call yesterday they said that the situation is stabilizing.


They are also seeing record traffic so they might be doing alright. We'll find out when they report earnings later today.


I mean record traffic means record server usage with record low ad spend from their customers.

However, that's just a short-term hit. FB and Google may very well increase their own spending so they can capture the huge amount of people who are now using their platforms.


I'm not sure that's the case, as they never made it into the FANG acronym. Even Twitter doesn't quite fit up there.


AFAIK, the FANG acronym was meant to represent the FAANG stocks, not the perceived quality of engineering within the industry.


I didn't know that. I thought it was because of their engineering chops.


I'm pretty sure it was stock based. For one, Netflix is like, completely different than the rest. They only hire senior engineers and basically substitute a talent pipeline with giant wages.

FAANG internships always sounded weird, considering, y'know, Netflix doesn't do internships.


I think Netflix is just in there because FAANG sounds cool.


I always thought Big 4 was because they were basically the top 4 biggest market cap companies in the US


I'm not sure if Netflix ever had a market capitalization higher than Microsoft.


The term was coined by Jim Cramer to describe a group of well-performing tech stocks.


Maybe calling them FAANGULA would have triggered some unfortunate associations…


Microsoft is hiring for JEDI like crazy.


I have not read it myself but the answer can probably be found in the book "The Science of Interstellar" [1]

Kip Thorne, a Nobel Prize-winning physicist, worked as the science advisor for Interstellar, so the Hollywood BS is pretty good!

[1]: https://www.amazon.com/Science-Interstellar-Kip-Thorne/dp/03...


Isn't that the wrong direction for the optimization? I would assume you would want to compile adding two numbers into shifting by one, not the other way around.

(I know nothing about hardware, it just intuitively seems like moving a bunch of bits over by 1 should be faster than dealing with xor and carries)


In hardware terms, adders are simpler than shifters. You can usually do both in a single cycle, but it's going to be lower power to do the add instead of the shift.

To put this in more concrete terms: an N-bit adder involves N 1-bit stages to add each bit, and then a 1-bit carry network on top of that, which has N stages in it. So overall, it's O(N) in terms of hardware. An N-bit shift unit is going to use lg N N-bit muxes--or O(N lg N) in terms of hardware. Total gate delay in both cases is O(lg N), but adders have O(N) hardware (and thus energy consumption) while shifters have O(N lg N).
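
To make the scaling concrete, here's a toy back-of-envelope model in Python (my own illustrative gate-count constants, not figures from the parent or any real process library):

    import math

    GATES_PER_FULL_ADDER = 5   # rough XOR/AND/OR count for a 1-bit full adder
    GATES_PER_MUX2 = 4         # rough cost of a 2:1 mux

    def adder_gates(n_bits):
        # ripple-style adder: one full adder per bit -> O(N)
        return GATES_PER_FULL_ADDER * n_bits

    def barrel_shifter_gates(n_bits):
        # barrel shifter: lg(N) layers, each an N-bit row of 2:1 muxes -> O(N lg N)
        layers = math.ceil(math.log2(n_bits))
        return GATES_PER_MUX2 * n_bits * layers

    for n in (8, 16, 32, 64):
        print(f"{n:3d}-bit: adder ~{adder_gates(n):4d} gates, "
              f"shifter ~{barrel_shifter_gates(n):4d} gates")

The exact constants don't matter; the point is the shape of the curves, with the shifter picking up an extra lg N factor in area (and hence energy).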

A secondary consequence of being larger area is that a superscalar architecture may choose to have one execution unit that has an adder and a shifter and a second that only has the adder. So an addition may schedule better than a shift, since there are more things it can execute on.


> To put this in more concrete terms: an N-bit adder involves N 1-bit stages to add each bit, and then a 1-bit carry network on top of that, which has N stages in it. So overall, it's O(N) in terms of hardware.

O(N) adders cannot meet the latency demands of modern high-frequency CPUs. The actual complexity of adders in real CPUs is usually O(N²).


There's some hardware that, surprisingly, doesn't have a real shift but instead has rotate operations. These take the bits that get dropped off the end and put them back on the other side; in those cases addition can be a much better choice than doing a bitmask and then a rotate. This kind of hardware is usually an embedded device that also has expensive multiplication instructions, so unrolling a multiply into a small number of fixed additions can sometimes perform better too.
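
A little sketch of that trick in Python (8-bit width chosen arbitrarily for illustration): emulating a logical shift-left-by-1 on rotate-only hardware takes a rotate plus a mask, while x + x does the same thing in a single add.

    WIDTH = 8                      # toy register width
    MASK = (1 << WIDTH) - 1

    def shl1_via_add(x):
        # one addition, result truncated to the register width
        return (x + x) & MASK

    def shl1_via_rotate(x):
        # rotate left by 1 (the dropped top bit wraps around to bit 0)...
        rol1 = ((x << 1) | (x >> (WIDTH - 1))) & MASK
        # ...then clear the wrapped-around low bit to turn the rotate into a shift
        return rol1 & ~1

    # both agree with a plain logical shift for every 8-bit value
    assert all(shl1_via_add(x) == shl1_via_rotate(x) == (x << 1) & MASK
               for x in range(1 << WIDTH))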


> it just intuitively seems like moving a bunch of bits over by 1 should be faster than dealing with xor and carries

Yes, a fixed shift-by-one unit would be much simpler than an adder. But many (most?) CPUs that support shifting have generic shift units, where the number of bits to shift varies, and that makes them much more complex.


Barrel shifters are still frequently omitted from microcontrollers. Primarily because of their size.


Right, I was thinking mostly of "application-level" CPUs capable of running Android.


Realistically when you get into real world questions about performance the only way to be sure is to measure it.

In this case I imagine you're right. Although it's also worth pointing out that, since modern CPUs are basically frontends that generate uOps, the CPU could actually perform the optimization by itself anyway. Time to break out PAPI (a very cool tool for anyone unaware: you can get instruction-level profiling in your program with basically 4 function calls and a header file).


Even with a worst-case scenario of bias, this is still extremely good news. My worry was always with super-spreaders who refuse to isolate, but the data suggests that those spreaders will soon achieve herd immunity among themselves. As long as the rest of society behaves, we will still hit zero cases relatively quickly.

(which wasn't always clear to me before since I initially predicted that this will take years to work out)


I don't think you are looking at this right. It will still take about a year (+/- a few months) for this to run through the population of the country. NYC spreads faster and was hit harder and earlier. Still only about 1/5 have had it. The remaining 4/5 (maybe 3/5 if there is some fraction of people that are just naturally immune/resistant) are going to get it. It's going to be a while to get there.


What?

All of these numbers are in line with what experts have been saying and modeling for more than a month. We will be dealing with this for at least another year. We will cycle through policy to loosen up and close back down a little. In the best case for NYC they are 1/4th of the way to herd immunity in numbers. With the drop in transmissions, this might be 1/8th of the way in terms of time. There will be more deaths in the future than have been recorded so far.

Look for the bright side in things, but zero cases is a pipe dream.


Herd immunity doesn't mean 100% exposure. It means a high enough incidence of antibodies such that the effective R0 goes lower than one, meaning that new outbreaks tend to shrink over time and not grow.

With most endemic viruses, antibody incidence is somewhere around 30-50% I believe, but I haven't seen any modelling for what covid is expected to do specifically.
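
For anyone who wants the back-of-envelope version (the standard SIR-style threshold under homogeneous mixing, nothing covid-specific): with basic reproduction number R_0 and immune fraction p, the effective reproduction number is R_0(1 - p), so outbreaks shrink once

    R_0 (1 - p) < 1 \quad\Longleftrightarrow\quad p > 1 - \frac{1}{R_0}

For R_0 ≈ 3 that gives roughly 67%, which is where the 65-70% figures elsewhere in this thread come from; 30-50% would correspond to R_0 of about 1.5-2.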


If there are a high number of asymptomatic/mild cases that are infectious to others, this means it will have to be more. I've seen numbers between 70% and 80%. That's why I took 1/4th to be relative to 80%.


25% prevalence is absolutely not in line with what experts were saying on March 23rd. Nor was an IFR of 0.5%.

The CDC reports that flu has an IFR of ~.13% in the US (61,000 deaths out of 45 million cases). That makes 0.5% roughly 3.7 times worse, not 10.

Also, herd immunity does not require 100% having positive antibodies, it will show an effect on Ro starting around 65%.


Bear in mind that the CDC estimate of 45 million influenza cases[1] is the number of symptomatic cases, and therefore it doesn't really make sense to directly compare that with Covid-19 IFR rates calculated from antibody studies which include both symptomatic and asymptomatic cases.

[1] https://www.cdc.gov/flu/about/burden/2017-2018.htm


It's not 25%. 0.5% versus 0.13% is not the only issue here in terms of how much worse it is. It's the long time in the ICU. The flu kills you fairly quickly or you get better fairly quickly, so you don't take up hospital capacity so long. Herd "immunity" does not require 100%, but that's a decent approximation. Sure, I'll grant that it "starts" to show an effect around 65%, but the effect is not so strong. 70%, much stronger. 80% very strong. Heck, you could probably do containment by then without waiting to get to 100%. Because inadvertent spreading would be so low.


It would show an effect on R0 immediately, but around 65% is when the effective R0 would hit 1 (since the virus has 2/3 fewer chances to spread and R0 starts at about 3).


Superspreaders aren't connected to each other.


> those spreaders will still soon achieve herd immunity among themselves

that's not what herd immunity means, unless we isolate those spreaders so they don't come into contact with "non-spreaders"


The non-spreaders are the people who are self isolating.

I think they're saying that people who go out and spread the disease will quickly catch it, recover, and be unable to catch it and spread it. That relatively small group of people who are refusing to self isolate will gain herd immunity, causing the virus to die out in that group of people, preventing them from giving it to the non-spreaders that have been at home the whole time.

Whether or not it will work out that way, I don't know.


While this dynamic may exist in some form, I don't think it's a powerful enough effect to stop the spread. There is not a firm dichotomy of spreaders vs. isolators, and the composition probably changes over time, such that the virus still has many opportunities to spread to previously-isolated groups.

