The title of this article is manipulative propaganda.
If you don't bother reading the article, the title suggests that Edison hired H1B visa holders as its own employees and then fired existing employees.
The facts in this article are that Edison fired some IT employees and instead hired contractors from Infosys and Tata.
Those contractors are employees of Infosys and Tata, they don't work for Edison. Implying that they do is the first attempt at misleading readers.
Another fact is that some of Infosys's and Tata's US employees are H1B visa holders.
So it's possible that some of those H1B holders will end up working at Edison, but the article doesn't actually make that statement except as hearsay (a claim made by someone, in this case apparently an Edison worker being laid off, but not verified by the "journalist" writing the article). In other words, it's quite possible that none of the people who will replace Edison employees are on H1B.
We just don't know, and the article is either so poorly written that it hides the facts behind the headline, or is maliciously or incompetently distorting them.
Really? That is quite naive.
I can tell you there is no question about who will be replacing the workers; this is exactly the same runbook every corp that has hooked up with these body shops follows.
There will be a mix of H1B onsite and offshore, just like Infosys/Tata/Wipro/etc have been doing for nearly two decades.
The FTE folks will be required to train their replacements, with enticements of severance and continued employment for a few months. Heck, sometimes the body shops will even end up hiring those FTEs at the same or a lower rate (with fewer benefits) until they can replace them offshore. Management will give the remaining employees the bullshit spiel about the global workforce, staying competitive, and how these wonderful partners have hundreds of thousands of PhDs just waiting to roll up their sleeves and bring leverage and major vendor relationships to the table.
I'm not sure why Jeff Sessions (R-Alabama, who raised the issue) is only now making noise, or why Computerworld is only now printing articles, but this is a surprise to no one, unless they have had their head in the sand.
Obviously you have way more information about this, but judging from what you've disclosed, it's a terrible investment.
Stock in a company is worthless (literally, as in "worth $0") until one of 2 things happens:
* the company gets sold
* the company does an IPO
Those are hard for all kinds of companies but next to impossible for a consulting company.
Consulting companies don't scale the way product companies do.
Revenue of $20k/month is nothing (it's roughly what Google spends on a good engineer per month). They don't make enough to pay you a market rate.
How is this company ever going to get to a multi-million dollar a year profit (a necessary but far from sufficient condition for a sale or IPO)?
You're so preoccupied with the details of the deal that you're missing the obvious thing: investing in this company looks like a terrible idea because there's no way it will ever be successful enough to warrant a sale or IPO.
The same goes for patents; see for example http://www.nolo.com/products/patent-it-yourself-pat.html (although the very fact that you're thinking about patents to "secure intellectual property" betrays a naïveté about how those things work in real life: if you don't have tens of thousands of dollars, at a minimum, to spare for a patent lawsuit, then obtaining a patent is pointless).
The MIT license requires attribution ("The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software."), so removing your copyright notice violates the license.
Realistically, there's not much you can do.
You could sue that person, but it would be costly (lawyer's fees) and the outcome is uncertain (those things are usually not litigated, so there's little case law to fall back on).
You could send them an e-mail to the effect of: "I noticed you've removed my name from your fork of my project. That's not cool and violates the attribution clause of the MIT license. I would appreciate it if you restored my copyright attribution." The important thing is not to be too forceful, or you might end up with the opposite result (i.e. they'll just dig in).
I write this fully acknowledging that programming language flamewars are pointless, but this article just shows that you don't even have to try hard to create a biased comparison.
Here's the essential difference between Go and Erlang: Go gets most of the things right, Erlang gets way too much wrong.
So what does Go get right that Erlang doesn't?
* Go is fast. Erlang isn't.
* Go has a non-surprising, mainstream syntax. You can pick it up in hours. Erlang - not so much.
* Go has a great, consistent, modern, bug-free standard library. Erlang - not so much.
* Go is good at strings. Erlang - not so much.
* Go has the obvious data structures: structs and hash tables. Erlang - no.
* Go is a general purpose language. Erlang was designed for a specific notion of fault tolerance, one that isn't actually needed or useful for 90% of software, yet every program has to pay its costs.
* Go has shared memory. Yes, that's a feature. It allows things to go fast. Purity of not sharing state between threads sounds good in theory, until you need concurrency and get bitten by the cost and awkwardness of having to copy values between concurrent processes (see the sketch below).
You just have to overlook ugly syntax, lack of string type, lack of structs, lack of hash tables, slow execution time. Other than those fundamental things, Erlang is great.
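To make the shared-memory point concrete, here's a minimal sketch (my own illustration, with made-up names, not something from the article): several goroutines work directly on one large slice, so nothing has to be copied or serialized between them.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        // One large slice shared by all goroutines; no copying between
        // workers, each one writes its own disjoint region of the array.
        data := make([]int, 1<<20)

        const workers = 4
        chunk := len(data) / workers

        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func(start int) {
                defer wg.Done()
                for i := start; i < start+chunk; i++ {
                    data[i] = i * 2 // direct access to shared memory
                }
            }(w * chunk)
        }
        wg.Wait()
        fmt.Println(data[0], data[len(data)-1])
    }

In a strict message-passing model you'd either ship chunks of the data to each worker and collect the results back, or funnel everything through a process that owns the data; either way you pay in copies or in an extra serialization point.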
> but this article just shows that you don't even have to try hard to create a biased comparison.
So, let's see your detailed, unbiased analysis...
> * Go is fast. Erlang isn't.
Hmm. Given that you accused the author of making biased comparisons, I would expect some more detailed information there. "Faster" doing what? Assembly is faster, Go isn't. Ok, let's use assembly then.
> * Go has a non-surprising, mainstream syntax. You can pick it up in hours. Erlang - not so much.
I think Erlang's syntax is small and self-consistent. If ; and . are the biggest stumbling blocks to learning a new system, then ok, maybe Erlang is not for you.
> Go is good at strings. Erlang - not so much
Erlang is very good at binaries. It can even do pattern matching on them. Decoding an IPv4 packet takes only 2 or 3 lines of code.
> Go has the obvious data structures: structs and hash tables. Erlang - no.
Erlang has obvious data structures: maps, lists and tuples.
> * Go has shared memory. Yes, that's a feature. It allows things to go fast. Purity of not sharing state between threads sounds good in theory until you need concurrency
Quite the opposite. You can get easy concurrency if you don't share things between concurrency contexts. You also get low-latency responses and non-blocking behavior.
Erlang also shares binary objects above a certain size behind the scenes, so those don't get copied during message passing.
It also has Mnesia, a built-in distributed database. It is used heavily by WhatsApp to share data between primary and back-up instances of processes running on different machines.
> You just have to overlook ugly syntax, lack of string type, lack of structs, lack of hash tables, slow execution time. Other than those fundamental things, Erlang is great.
Ok, it looks like we only have to overlook syntax. Sounds good to me then; I can handle . instead of ; and I will also learn and use Go, because both are very cool and interesting tools.
> Assembly is faster, Go isn't. Ok, let's use assembly then.
That's pretty flawed logic. It can be used to dismiss any valid statement. "B is better than A at property C." "Z is better than B at property C; let's ignore the valid A vs B comparison."
What matters is how much better something is at a given property, how valuable that property is (to you/your tasks), and what the cost of that improvement is - in the context of the big picture.
I like and use Go not because of any single aspect, but because I enjoy the entire package. It has some good parts and some weak parts (e.g., I wish the `go build import/path` command were consistent about generating or not generating output in the cwd regardless of whether the target package is a command or a library - that way it could be reliably used to test whether a package builds without potentially generating files in the cwd).
Exactly. I was replying to his message and mocking his way of conducting a conversation. It should be read with that in mind.
> "B is better than A at property C." "Z is better than B at property C; let's ignore the valid A vs B comparison."
Yes. I think you probably want to direct that at the gp post, not my post ;-) You also probably want to use different capitalization for properties than for entities (or at least letters from the other end of the alphabet), like, say, "B is better than A at property x".
> I like and use Go not because of any single aspect, but because I enjoy the entire package
> Hmm. Given that you accused the author of making biased comparisons, I would expect some more detailed information there. "Faster" doing what? Assembly is faster, Go isn't. Ok, let's use assembly then.
This makes a big difference. With pure Go I can do computation-heavy things; for example, I've implemented some information retrieval methods in Go (indexes, stemmers, ranking and so on). I can't do the same in Erlang, because my map-based inverted index, written in Erlang, would be too inefficient. I could write it in C, but in my case most of the application's complexity is located in this information retrieval part. Because of that, efficiency is a big deal for me.
> You can get easy concurrency if you don't share things between concurrency contexts.
We typically don't call that concurrency. Some resource has to be under contention, otherwise it's just plain old parallelism. In Erlang, you have to ship everything to everyone, but the resources are still technically shared...just very inefficiently.
Concurrency is a property of the algorithm. Parallelism is a property of the running environment. The hope is that, given a large amount of concurrency, that concurrency can be reasonably and easily distributed over parallel execution units (CPUs, sockets + select loops, goroutines, tasks, processes, separate machines, VMs, etc...).
So if you have concurrency in your problem domain/algorithm (say, web page requests), and your language has reasonable ways to handle it - abstractions like processes, tasks, etc. - then you can make each request spawn an isolated, lightweight Erlang process (it takes just microseconds and only a few KB of memory). Finally, at runtime, if you have a multi-core machine or started your Erlang VM with a good number of async IO threads, there is a good chance that you'll get good parallelism too! But if you only run that bytecode on a single-core machine, you might not get parallelism, yet the concurrency will still be there.
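The same split is visible in Go terms, too. Here's a toy sketch of my own (nothing from the article, names made up): the concurrent structure is in the code, while the degree of parallelism is a runtime decision.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // handle pretends to serve one request; each request gets its own
    // lightweight unit of execution, roughly analogous to spawning a
    // process per request.
    func handle(id int, results chan<- string) {
        results <- fmt.Sprintf("request %d served", id)
    }

    func main() {
        // The concurrency is a property of the program: 100 independent
        // handlers. With GOMAXPROCS(1) they merely interleave on one core
        // (no parallelism), but the concurrent structure is unchanged.
        runtime.GOMAXPROCS(1)

        results := make(chan string, 100)
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                handle(id, results)
            }(i)
        }
        wg.Wait()
        close(results)
        for r := range results {
            fmt.Println(r)
        }
    }

Run it on one core or many; the program text (the concurrency) doesn't change, only how much of it executes in parallel.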
> Some resource has to be under contention,
Why? That is just resource contention, and you don't want resource contention if you can avoid it. Think of concurrent computations as independent units of execution.
> In Erlang, you have to ship everything to everyone, but the resources are still technically shared...just very inefficiently.
Because it keeps things (including faults, but also logical decomposition) simple. Also, it maps well to real life. You have to send messages to other people: emails, phone calls, and so on. You don't directly plug a shared network backplane into their brains, such that when it crashes the whole group dies. You want to send a message to them and then continue minding your own business.
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other (stolen from wiki). Here "simultaneous" could mean interleaved (multi-threading on a single core) or literally at the same time, but the interacting with each other part is key. They have to communicate; they have to communicate what they know about the world, and what they know can change... they necessarily share state, whether explicitly or not.
Parallelism is actually doing stuff at the same time, usually for a performance benefit, in which case you aren't going to be using any of these slow FP languages anyway (but there is always Java + MapReduce, which is kind of functional). Your Erlang code will run "faster" relative to single-core Erlang code, but if you honestly believe that you are being parallel fast...
> Because it keeps things (including faults, but also logical decomposition) simple. Also, it maps well to real life. You have to send messages to other people: emails, phone calls, and so on. You don't directly plug a shared network backplane into their brains, such that when it crashes the whole group dies. You want to send a message to them and then continue minding your own business.
I get it: back in the old days, before cell phones and screen sharing, we'd have to use carrier pigeons to send messages to each other, so we just sent messages out and minded our own business. But somewhere down the line, we gained the ability to interact in real time via language, even looking at and modifying the same stuff (I think this happened 50,000 years ago or so). We've never been able to recover from these impure interactions.
> Parallelism is actually doing stuff at the same time, usually for a performance benefit, in which case you aren't going to be using any of these slow FP languages anyway (but there is always Java + MapReduce, which is kind of functional). Your Erlang code will run "faster" relative to single-core Erlang code, but if you honestly believe that you are being parallel fast...
Erlang's shared-nothing architecture is wonderful for avoiding cache misses (false sharing). That's a big stumbling block for good parallelism on today's multicore machines. Also, each process having its own GC helps even more, and GC is an even bigger problem. Go also makes less use of GC than other languages.
You are aware that Erlang was invented to route telephone calls, which maps very well to message-passing concurrency? Erlang is, after all, not a pure functional programming language.
> and potentially interacting with each other (stolen from wiki). [...] but the interacting with each other part is key.
Sorry, I don't see it. It seems you took something that was optional and turned it into "key". It is not key. "Potentially" means they can interact, but they don't have to.
It seems you conveniently misinterpreted the definition from Wikipedia to fit your idea of what concurrency and parallelism are.
Erlang does not specify how the message-passing semantics is achieved; in principle it could be implemented with shared memory. But then it is hard to garbage collect processes independently - take Haskell as an example: it uses shared-memory concurrency (STM), and the garbage collector has to stop the world to collect. Message passing also allows the system to be transparently distributed over several nodes.
Note that the people who espouse Erlang are espousing the Erlang VM (mainly because it doesn't have a name besides "the Erlang VM".) Nobody likes Erlang's syntax. Nobody likes Java on the JVM either, but Clojure's pretty great. Use Elixir, and you get pretty syntax, and also "string support" and "a consistent stdlib" for free.
And, since other people have already rebutted your statements about structs and hashes, I'll ignore that. †
On needing speed: for IO-bound operations (concurrent ones especially), Erlang is faster than Go. For CPU-bound operations, Erlang knows it can't beat native code--so, instead of trying to run at native-code speeds itself, Erlang just provides facilities for presenting natively-compiled code-modules as Erlang functions (NIFs), or for presenting native processes as Erlang processes (port drivers, C nodes.) If you run the Erlang VM on a machine with good IO performance, and get it to spawn the CPU-bound functions/processes/C-nodes on a separate machine with good CPU performance, you get the best of both worlds.
And finally, on "every program having to pay the costs": is someone forcing you to use Erlang to create something other than fault-tolerant systems? Learn Erlang. Learn Go. Learn a bunch of other programming languages, too. Use the right tool for the job. The article is the rebuttal to people who claim that Go replaces Erlang in Erlang's niche, not a suggestion to use Erlang outside of its niche.
---
† For a bonus, though, since nobody seems to have brought this point up: Erlang has always had mutable hash tables, even before R17's maps. Each process has exactly one: the "process dictionary." Using it in regular code is discouraged, because by sticking things in the process dictionary you're basically creating the moral equivalent of thread-local global variables. However, if you dedicate a gen_server process to just manipulating its own process dictionary in response to messages, you get a "hash table server" in exactly the same way ETS tables are a "binary tree server."
Erlang's syntax is not my favorite, but as a professional programmer, it's just something you deal with. You get used to it, and in the end it's ok to work with. Over time, as a programmer, unless you live in some kind of Java silo, you are going to deal with lots of different languages and syntaxes. I've used, professionally: C, Tcl, Perl, Python, PHP, Java, Ruby, Erlang, SQL, HTML, and probably a few other things I'm forgetting. After a while, you get to the point where you can pick something up and use it and appreciate what's good about it without getting hung up on its warts, unless they are such that they really prevent you from being productive. Erlang's syntax does not fall in that category.
> Note that the people who espouse Erlang are espousing the Erlang VM (mainly because it doesn't have a name besides "the Erlang VM".) Nobody likes Erlang's syntax.
I think lots of people like Erlang's syntax and find it well adapted to what Erlang strives to do. There are certainly people who don't like the syntax but do like its features, but I don't think it's at all true that praise for Erlang is just praise for BEAM (and, yes, the current Erlang VM does have a name, as does its predecessor, JAM).
I mean, Erlang's supporters have made Erlang-to-C and Erlang-to-Scheme compilers, and the main distribution includes a native code compiler (HiPE), so it's pretty clear that its supporters don't think the VM is the only good thing about Erlang.
I like the Erlang syntax, and honestly think it's pretty easy to reason about.
The only thing that's goofy is the ,;. endings, but I just stopped thinking about them after a few months of writing Erlang and they became second nature just like everything else.
It's not really "BEAM on the JVM"; it's actually a translator of BEAM-compiled code to JVM bytecode. But the concept is similar: the target for Erlang is important, but so far a variety of targets have been used. JAM was the original VM, later BEAM, later BEAM with HiPE. See [1] for some more information. There was also, apparently, an Erlang-to-Scheme translator. It'd be interesting (does this exist yet?) to see someone implement Erlang semantics in Racket.
While it's true that Erjang translates BEAM code, BEAM itself also translates it to yet another, more modern internal code. I'd still consider both BEAM implementations.
Yes, though Erlang has many built-in functions that make it more than just a bytecode runtime. See Erjang (with the j) for an example of a BEAM written on the JVM.
One thing that complicates the construction of a new compatible virtual machine is that the .beam files are actually pre-processed at load time into another internal format that is then executed by the VM. Nevertheless, see for example http://erlangonxen.org/ - they have built a new VM running on top of Xen.
> * Go has a great, consistent, modern, bug-free standard library. Erlang - not so much.
Erlang has one of the most battle-tested standard libraries around. OTP is rock solid. It may not be bug free (I don't know that any language can claim a bug free standard library), but it's damn close.
It's pretty obvious you haven't spent much time at all with Erlang based on your points here:
- Claiming "X is fast, Y isn't" is not even an argument; you should've just left that out.
- Arguments about syntax are rather pointless, but Erlang has very consistent syntax, and it's small. You can pick up Erlang in a couple of hours if you are familiar with FP concepts.
- I haven't encountered any real problems working with strings in Erlang. It may not have a bunch of standard library functions for manipulating them, but it's pretty trivial to do most things you would in any other language. It makes up for it with how much of a breeze it is to work with binary data.
- As mentioned previously, structs aren't data structures, and Erlang has an equivalent (records) for those anyway. Erlang has trees, maps, tuples, lists - I would consider those a lot more obvious and necessary.
- Erlang and Go are both general purpose programming languages. They don't share the same design goals though. You could write a Go program to do a poor approximation of what Erlang is good at, and vice versa, the point though is to use the proper tool depending on the application. I don't know where you got the idea that fault tolerance isn't important for "90% of software", but the software I work on certainly requires it.
- Your argument about shared memory makes it clear you haven't actually used Erlang. The copying of values between processes is abstracted away from you entirely, there simply is no awkwardness. Perhaps there is in Go.
You are claiming the article is biased, but your post is riddled with bias. There are certainly problems with Erlang, but none of the things you list is one of them (except perhaps strings).
To be fair, the standard libraries do have some unexpected inconsistencies. The array module indexes from 0 whereas everything else starts at 1; setelement takes (index, record, value) while the similar-feeling dict/orddict:store takes (key, value, dict) (so the value and the collection arguments are swapped between the two); things like that. Nothing really major, but a few things that mean you end up looking at the docs or autocomplete now and again because you forget argument ordering.
Erlang strings are quite inefficient if you aren't careful, and their printing is just terrible; a lot of the Erlang community uses binaries where possible instead, since those behave more like you would expect and are generally faster (especially since concatenation can be achieved with io_lists). It's a fair point: most people trying Erlang assume strings are a basic data type (since they were in whatever language they're coming from), they don't know to use binaries wherever possible, and so as soon as they see how slow strings are, or get a non-printable character and the entire string prints as a list of integers, it's rather off-putting.
The way I see it, arrays and binaries start at 0 because the index represents an offset. Others start at 1 because the index represents a position (first, second, third, ...) instead.
As for the function argument orders, there's no explanation for that one.
Erlang strings are a whole other subject, for which I recommend you give an eye to this blog post for the rationale behind a lot of their behavior: https://medium.com/functional-erlang/7588daad8f05 . It's a decent read on the topic.
Definitely good points. Your first is one of those really small but really annoying things about Erlang. I spend a lot more time with Elixir than Erlang, and while the first issue is addressed and the second is mostly covered, printing is still a pain point for people new to the language. Once you understand the caveats it becomes a non-issue, but it's certainly frustrating for new users of both languages.
> Go has shared memory. Yes, that's a feature. It allows things to go fast. Purity of not sharing state between threads sounds good in theory, until you need concurrency and get bitten by the cost and awkwardness of having to copy values between concurrent processes
There are approaches that allow the flexibility of shared state without the possibility of lurking data races or, worse (in Go's case) lack of memory safety. Even JavaScript has such a solution now (Transferable Objects). In fact, Erlang itself has one such approach: ETS.
To be honest, I don't think unrestricted shared state is the right thing in a programming language. It just invites too many bugs (and race detectors don't catch enough of them).
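To make that concrete, here's the classic toy example of the kind of bug unrestricted shared state invites (a sketch of my own, not from the thread); Go's race detector flags it, but only when the racy path is actually exercised at runtime:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // unsynchronized read-modify-write: a data race
            }()
        }
        wg.Wait()
        // Often prints less than 1000; `go run -race` reports the race,
        // but only because this code path actually runs.
        fmt.Println(counter)
    }

Fixing it means adding the discipline back by hand: a mutex, an atomic, or a channel/goroutine that owns the counter.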
I don't know that much about Erlang, or Go for that matter, but there are a couple of things that seem to be at issue:
* From what I can tell the times I've looked at it, Erlang's syntax is pretty standard if you're used to functional programming. Sure, if your background is in C and Java, you might have a hard time picking up Erlang syntax, but if you have experience with Haskell or ML, it will be nothing new, except for (that I can tell) its syntax for bit-level operations.
* Structs are not data structures. They are a way to represent objects. Erlang has these as well, in the form of Records, which as far as I can tell are pretty much exactly the same thing as structs.
* Hash tables are only really useful with mutable data, which has its own set of issues and which Erlang does not have (much). Erlang does have maps, which act much the same way but are immutable.
I think it really comes down to that Erlang was designed for a restricted set of uses, which makes it really excel there but seems to hamper it in other areas.
> Hash tables are only really useful with mutable data
But maps (arbitrary k:v associative arrays) are not; they're useful in all sorts of contexts. And R17 adds EEP 43 maps[0] to Erlang (though IIRC only with constant keys at this point).
In Go, the iteration order of map keys was always non-deterministic.
At some point it was changed to also be randomized, i.e. iterating over the same map a second time would produce a different sequence of key/value pairs than the first time.
This is exactly what the article says, except using more words.
Listen. Order of key iteration is a well-understood attribute of a hash map, and it's always non-deterministic. That's what I said, and I am not missing the point. The poster tries to imply an insertion-order iteration in hash maps that has never existed.
My second, arguably more interesting point, is that the poster's own code violates the premise of his post.
Order of key iteration in a hash map is not non-deterministic unless the hash is using a random salt. Unless you are using a random salt for each hash map, hashes can be precomputed and will always be the same for the same input of keys, and therefore deterministic, even though the order may look non-deterministic in nature.
On the other hand, I agree that the author of the article does not seem to understand the underlying data structures. Either that, or the way he has written the article portrays a lack of understanding.
It is quite possible that pre-1.0 they may not have been using a hash map at all, and instead were using an ordered map, which would have given insertion-order iteration.
Although note this is pure speculation as I have not tested this on pre 1.0, and in fact you may be absolutely correct that he is implying an iteration order that never existed.
What should be noted though is this:
https://code.google.com/p/go/issues/detail?id=6719.
It looks like as of Go 1.3 there will be a sort of semi-random iteration, where each bucket in the hash map will be iterated over in increasing or decreasing order, chosen at random. Which is good, as it is not too much of a performance hit, with the benefit that the iteration order will be non-deterministic even for small maps, which is currently not the case.
EDIT:
Here is an explanation of how map iteration works in Go before 1.3 and from 1.3 on:
Map iteration previously started from a random bucket, but walked each bucket from the beginning. Now, iteration always starts from the first bucket and walks each bucket starting at a random offset. For performance, the random offset is selected at the start of iteration and reused for each bucket.
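If you want to see the non-determinism for yourself, a trivial sketch like this will do (exactly how the order varies depends on your Go version; on 1.0-1.2 small maps could still come out looking stable, which is what the issue above is about):

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}

        // The spec only guarantees that each key/value pair is visited once;
        // it says nothing about order, and successive iterations (or runs)
        // may produce different sequences.
        for i := 0; i < 3; i++ {
            for k := range m {
                fmt.Print(k, " ")
            }
            fmt.Println()
        }
    }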
interface{} is nothing like void* because it's type-safe. An interface{} value is a (type, value) pair, i.e. it remembers the type of the value it contains.
A void* is just a pointer; it carries no type information.
In C, a cast is unsafe because it doesn't tell you if you did something wrong.
In Go, a type assertion is safe. It tells you whether it succeeded, and if you ignore that check, it'll panic, informing you that you have tried to perform an illegal operation.
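A small sketch of the two forms of type assertion (just the standard library, nothing article-specific):

    package main

    import "fmt"

    func main() {
        var v interface{} = "hello" // the interface value remembers (string, "hello")

        // Comma-ok form: safe, reports whether the assertion succeeded.
        if s, ok := v.(string); ok {
            fmt.Println("got a string:", s)
        }
        if n, ok := v.(int); !ok {
            fmt.Println("not an int, zero value returned:", n)
        }

        // Single-value form: panics on a type mismatch, so a mistake is
        // reported loudly instead of silently reinterpreting bits the way
        // a bad void* cast in C would.
        _ = v.(string) // fine
        // _ = v.(int)  // would panic: interface conversion
    }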
I was partially incorrect. I said "reflection" could look it up, but you can check it during the conversion, as you said (from Effective Go[0]). But my original point (before the edit) still holds: you need to convert any data stored in a common data structure implementation, and you don't easily know its type. So you have to track the type of the data in that structure yourself. If you pass around a tree, linked list, or whatever, the type of the stored data isn't clear from the type information.
But you are right; I'm new enough to Go to have forgotten about the 'ok' check on the type conversion. Checking errors on any unsafe operations is really the best way to go. If you are unsure of the type of a structure you are getting, you really should be checking it.
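For what it's worth, the usual way to recover the type of values pulled out of such a container is a type switch; here's a tiny sketch of my own (the container and names are made up):

    package main

    import "fmt"

    // describe branches on the dynamic type of a value pulled out of a
    // generic container ([]interface{}, a tree of interface{} nodes, etc.).
    func describe(v interface{}) string {
        switch x := v.(type) {
        case int:
            return fmt.Sprintf("int: %d", x)
        case string:
            return fmt.Sprintf("string: %q", x)
        default:
            return fmt.Sprintf("some other type: %T", x)
        }
    }

    func main() {
        items := []interface{}{42, "hello", 3.14}
        for _, it := range items {
            fmt.Println(describe(it))
        }
    }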
"Security" is an invisible quality, by which I mean it cannot be easily observed and because of that it cannot be easily compared and because of that is not going to drive adoption.
This is in contrast to visible qualities: price, performance, availability of the source code and its licensing terms, size of the ecosystem (number of applications for the OS, number of books, articles, conferences, programmers who know how to program for it) etc.
How exactly will you demonstrate that Ethos is more secure than, say, OpenBSD?
I think it's more about the possibility of guarantees.
OpenBSD has untyped IO. Typed IO gives you guarantees that untyped IO can never give you. For starters, a number that doesn't validate properly as an Int will simply not be able to pass through, potentially stopping, if not Heartbleed itself, then bugs like Heartbleed.
Don't you think companies and other interests would like stronger guarantees, especially when they're running applications that protect information that hackers and foreign governments and other companies would love to see?
Until it becomes difficult to work with and is perceived by someone as slowing them down, at which point someone will come up with the bright idea of typing the IO channel with a type suitable for layering an untyped stream over it.
This is the reason why we believe the Tao--the way--is essential to an OS. It is the programming paradigms and use, combined with OS semantics, which is the genius of UNIX.
Where does the IO typing come from? Is it some programming language? The website says that it uses C for kernel and Go for user space, neither of which are known for having advanced typing systems.
I don't think the number of applications is a big deal when it comes to stuff like this. As long as it has a secure network stack and implementations of various servers for core internet infrastructure, it's good enough for me. Now, if we were talking about consumer-grade operating systems, then it would matter.
Agreed- a new, more secure OS will need other good qualities to actually market itself on.
One idea that could improve both security and the ecosystem would be a capability based design. Separating components through standard protocols/interfaces could enable something like current mobile permissions to be backed by different implementations (including virtualized/sandboxed ones), in some cases swapped out by users like commands in a shell pipeline.
I haven't seen much work in this direction; does anybody think this would or wouldn't work?
It's not always invisible when your computer or phone gets pwn'd and your email account starts sending spam, or your identity gets stolen. I think as the world becomes more technically literate, and insecure systems proliferate, security will become more and more visible. I would at least expect it to be the next competitive battlefield once usability starts settling down (as everyone figures out what does and doesn't work).
It can be partially observed by looking at the amount people in-the-know (the developers, insurance underwriters, auditors, ...) are willing to bet on the security.
The reason security will drive the adoption of a new OS is that little else will drive people away from the current ones. In a future where our current architecture is being constantly exploited, security will finally matter enough to drive us to something new.
Here's why I don't agree. This is a real conversation I had with someone about Heartbleed:
N: So will I have to change all my passwords?
ME: Yes, you should.
N: That's a lot of work.
ME: Yes, but if you don't someone is likely to break into at least some of your accounts. At least make sure you've changed the password to your mail account, and set up two factor auth [very simplified explanation of what two factor auth involved], and check that all accounts you care about use that mail account for password recovery.
N: I'm not sure if I can be bothered.
This is a relatively technically experienced user.
It fits with other experience I've had: security is perceived as a hassle until it's too late, and then users do the bare minimum, even in the face of ongoing threats.
Corporate users might help drive adoption, but only if the cost and hassle is limited enough, and the damage of not going there is high enough.
The future will tell about Ethos' success. But I think the early adopters will be the tech-savvy community that wants security & privacy and buys into Ethos' programming model.
Pointing out the obvious: engineers are not "goods" that Google or Apple produces.
Even if we consider employees' salaries to fall under that dubious categorization, good luck proving the salaries were "likely depressed" when, during that period, salaries went up. A lot.
It seems that it only affected lazy and apathetic engineers, i.e. those who couldn't be bothered to send out a resume.
Uh, goods are not just things companies produce, but also things they consume. Therefore the term could also cover employees.
Also, if you have a look at the class action filing, the mechanism of action is quite clear.
To lay it out: the way salaries are set is that for any given title (e.g. Software Engineer I) there is a pay range. Hiring managers do not have general permission to go outside the pay band. In the interests of fairness, these pay bands are adhered to fairly strictly.
So depressing the wages of even a sub-section of the employee pool helps keep the pay band down.
While you may deem this a mere 'theory', there is evidence of its effects. Specifically, Google was forced to give 10% raises to its entire workforce when Facebook would not accede to a cold-call prohibition.
Your final sentence is not very complimentary to your fellow engineers - oftentimes engineers are focused on the problem and unaware of their place in the market. To call them lazy and apathetic is pretty mean-spirited.
Or loyal and happy ones, who believed the company was paying them market rates.
In any case, if the companies formed a price-fixing cartel, they're just as wrong if they failed to have an effect as if they did have an effect - the damage is just smaller if they didn't.
Do you seriously mean 'good luck proving the salaries were "likely depressed" when during that period the salaries went up' when there were non-recruiting agreements between the top payers? How do you think salaries increase?
It would be shockingly unlikely if salaries were not depressed, and that is what's actually obvious.
Under US antitrust law, if you conspire with others in the marketplace to fix prices, you can be prosecuted. Doesn't matter whether you're fixing prices for labor, pizza, cats, photos of cats.
If you have more links, I'll be happy to add them.
I have a much longer list of companies that use Go (https://quicknotes.io/n/1XB0-companies-using-go), but it's harder to find descriptions of rewrites than to find out who's using Go.