I’m a meat eater and I want to replace meat, so it’s not just vegetarians driving this. The factory farming of animals doesn’t sit well with me ethically, but meat has been a staple of my diet my entire life. If there were a viable plant-based alternative that’s indistinguishable, factory farming could die out.
It’s not ready yet, and phase 1 was the really weird stuff. I think Beyond Meat is something like phase 2: “pretty close”. In the near future, you won’t be able to tell the difference.
> Software engineering is uniquely multiplicative because it's possible for a software engineer to create software that does software engineering.
I think you misunderstood the O-ring explanation by citing that idea as an example. It doesn't mean building AI. It applies to any of those fields you mentioned. Good people in field X work together with other good people in field X in top firms -> multiplicative productivity.
Notice that none of these images zoom in on pixels for the comparison. The fight for quality among high-end DSLRs is approaching the point of unnoticeable differences, but phone cameras are benefitting immensely from new tech.
In fact, it's probably the area where we see the most gain compared to any other improvements in iPhone versions.
Perhaps your feeling is from the fact that new phones are released in rapid succession. If that's the case, just compare quality in phones a year or two apart instead of a few months.
Biggest game in DSLR world right now is low light. The new D850 is pretty crazy in that respect, with a full stop of ISO reduction. No phone camera on this earth is going to help me take better photos in low-light restaurants (which I need for a side business).
> Biggest game in DSLR world right now is low light
As an a7Sii owner I couldn't agree more (the D850 looks great too). Phone cameras are going to have a hard time catching up there. There's only so much light and at some point, sensor size matters.
In my GP comment I mentioned being amazed by my friend's iPhone 7+ pics, and it's true, I was. But when the sun went down, the a7Sii ruled over everything and captured every good picture from then on. There's a long way to go before mobiles can compete at night. Nice to know our investments aren't totally useless :P
Is low light something a lens attachment over the iPhone camera could help with? There are a few lenses out there that are supposed to be quite good (like the Moment ones), but I don't know if they help with that. It would be interesting if they kept the FOV the same and just provided more light.
Well, you could collect a huge amount of light and send it through to the phone sensor; that would have the same effect. But you're talking about a very big lens. The pro DSLRs have a 35mm sensor - that's ~864mm². The iPhone's is what, 60mm²? Thereabouts?
No matter how you cut it, that's generously 1/10th the sensing area/capacity (same thing). And it's not like the sensor tech in the iPhone will be 10x more efficient than contemporary DSLRs - iPhone sensors are made by Sony, who use the same tech in their own cameras. To get comparable light, cell for cell, you'll need a lens with 10x more light ingress, and sophisticated optics to focus it precisely down to a tiny half square centimetre. This lens would be larger, heavier and more expensive than the phone itself. Think "large can of tomatoes" size.
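To put rough numbers on that comparison - a back-of-the-envelope sketch using the sensor areas discussed above (the exact iPhone sensor area varies by model; ~60 mm² is the guess from the comment, not a spec):

```python
import math

full_frame = 36.0 * 24.0   # full-frame sensor area in mm^2 (~864)
phone = 60.0               # the ~60 mm^2 guessed above for the iPhone

ratio = full_frame / phone   # ~14x the sensing area
stops = math.log2(ratio)     # light-gathering advantage in photographic stops

print(round(ratio, 1), round(stops, 1))  # -> 14.4 3.8
```

So "generously 1/10th" is right: the real gap is closer to 1/14th, almost four stops of raw light-gathering area to make up in optics or sensor efficiency.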
Not saying it's impossible. Mobile phone companies have pulled off some amazing advances and I have no idea what tricks might be up their sleeves. But for now, no mobile even touches full frame DSLRs in low light, and it doesn't seem like an easy hill to climb.
Actually there is an example in the article, comparing pixel-level detail from the last 4 iPhones in a low-light scenario (“old man” picture). The improvement with the 8 is impressive to say the least.
Interestingly, the article points out how far cellphone cameras have come by comparing shots from various iPhone versions alone - you can see how much better the quality has gotten even just within Apple's lineup.
This also opens the door to more secure payments. The "basic card" handler is one where the browser might just store your raw card number. But you can imagine your bank implementing a payment handler that requires you to enter a 2FA code before using your card.
When it comes down to it, a credit card number currently serves double duty as both username and password for online payments. This new API has the potential to significantly improve on the security of raw PANs.
Interesting that the post says the term BIN (vs. IIN) isn't commonly used anymore, but it's practically the only thing I've heard them called in the last 10+ years of ecomm dev.
I've seen both. When I wrote the post all the docs from the gateways I had access to called it the iin with a historical note on bin. I don't have access to those docs anymore, though.
Edit: I really need to check when posting from my phone
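For reference, extracting the IIN/BIN from a PAN is just taking the leading digits - a minimal sketch of the classic 6-digit form (newer ISO/IEC 7812 allocations extend this to 8 digits):

```python
def iin(pan: str) -> str:
    """Return the IIN (historically called the BIN): the leading
    6 digits of a card number, ignoring spaces and dashes."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return digits[:6]

# A well-known Visa test PAN, not a real card number
print(iin("4111 1111 1111 1111"))  # -> 411111
```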
I've been building for the web for 15 years, and Aurelia feels more natural to me. Granted, my Angular projects were all in Angular 1, and it has since undergone a massive rewrite, almost as if it's a different framework now. But based on my extensive experience with web applications using Java, Ruby and Node.js on the server (sometimes single-page-application frontends, sometimes minimal JS with jQuery libraries, etc.), Aurelia just felt more naturally web-like. That doesn't mean Angular can't do the job, but it feels like a heavier tool that doesn't quite belong.
Aurelia also feels easier to build with (less unnecessary boilerplate) and more maintainable in the end. The community is small but has made immense strides in a short amount of time. There is a company backing it, but a very small one, not a giant one such as Google or Facebook (and those giant ones make no guarantees, but the small one backing Aurelia has emphasized their long-term commitment time and time again). They have an ever-increasing number of community contributors and features have been added at a rapid pace. Since I began a year and a half ago, UI toolkits have regularly added Aurelia integrations. The CLI has improved by leaps and bounds.
You can find out more on their website and other blogs and articles around the web, but you may find this helpful:
It seems to have been a mistake on the author's part to allow himself to be exploited at first (working without having any of the promises in legal writing). The exploitation is unethical but not uncommon, and on its own wouldn't produce the outrage to warrant an article detailing it.
The missing piece is why he doesn't feel that pressing criminal charges and suing over the IP is an option. The language seems to imply the answer is his physical safety. If he's writing this article because he feels safe enough to do so, then maybe he thinks a court battle would require being physically present and would eliminate the safety net of distance.
He's able to list pretty concretely his contributions but is suspiciously vague about the cause of his fear. I'm inclined to believe that it's because he might not think the public would unequivocally agree his fear is sound.
From what I've gathered from people working in the blockchain community, some of the altcoins geared towards being Ponzi schemes have investors with Russian-oligarch/mafia or similarly shady backgrounds. I wouldn't be surprised if that isn't completely contained to the obvious scams.
You could point to frameworks that had majority usage at any given point and say those were the "right" ways at the time. But ideas evolve, which isn't so ridiculous.
The correct analogy would be "There are 14 competing standards, with a new one released each week that creates doubt about the ongoing use of existing established standards, and no one knows what the fuck is going on".
Do you want an open ecosystem or a controlled one? There is no in-between, and you will hate it no matter which you choose. On the open side you have things like Linux, with 10,000 independent distributions doing slightly different things. On the controlled side you have Microsoft, Apple, and Oracle telling developers the "right" way to do things.
Not sure about in general, but it seems like their residency program [0] gives a good shot at landing on the team afterwards, and it's accessible to "normal" people passionate about ML.
Programming (expected): intermediate Python programming skills: work effectively with loops, control flow, data structures, files, functions and OO programming. Prior experience with PyData libraries is also recommended (e.g. NumPy, Pandas, Matplotlib).

Mathematics (recommended): matrix-vector operations and notation.

Machine Learning (recommended): understand how to frame a machine learning problem, including how data is represented, how models are evaluated on the task and against each other, and how to optimize model performance for the best evaluation.
As somebody who's recently started learning more about ML, a lot of the work of an ML engineer does seem automatable (not the research or boundary-pushing, just applying ML to some product need). For example, choosing hyperparameters, evaluating which features to collect, etc. seem like things that could be automated with very little human input.
His slide on "learning to learn" has the goal of removing the ML expert from the equation. Can somebody who's more of an expert in the field comment on how plausible that is? Specifically, in the near future, will we only need ML people who do research, because applying ML becomes trivial once automated?
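As a concrete, if toy, illustration of the "automatable" part: exhaustive hyperparameter search needs no human in the loop once the search space and metric are fixed. A minimal stdlib-only sketch (the `train`/`validate` callables and parameter names here are made-up placeholders, not any real library's API):

```python
import itertools

def grid_search(train, validate, param_grid):
    """Try every hyperparameter combination; keep the best by validation score."""
    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = validate(train(params))
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy usage: "training" just returns the params; the validation
# metric is highest at lr=0.1, depth=3, so the search finds them.
best, score = grid_search(
    train=lambda p: p,
    validate=lambda m: -abs(m["lr"] - 0.1) - abs(m["depth"] - 3),
    param_grid={"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]},
)
print(best)  # -> {'depth': 3, 'lr': 0.1}
```

Real AutoML systems replace the brute-force loop with smarter search (Bayesian optimization, bandits), but the human-free shape of the problem is the same.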
There is one job that is still difficult for a machine to do well (although machines are improving): feature engineering.
ML works very well in bounded/closed domains like image and sound recognition. Open domains are much more challenging.
Building predictive models from data in specialized domains often requires insight, which machines cannot provide. For instance, say you collect a bunch of data and are trying to predict sales. You need domain knowledge, experience and intuition to know which variables are causal and which are merely correlated. If you just throw all the variables into the mix and build a model from that, you will end up with a model that overfits badly.
There are automated "variable selection" techniques that can help prevent overfitting, but they are imperfect, because machines can only detect correlation, not causation. Also, many regression/classification techniques are easily fooled by noise and highly nonlinear relationships. We did some work a few years ago comparing predictive models built from a ton of sensor data (with automated variable selection) against a parsimonious one built on select data that we knew accounted for 80% of the effect. The latter model was far superior. Noise/non-causal variables often don't just "wash out", even with very good variable selection algorithms.
It takes domain knowledge to figure out what variables matter and what variables don't.
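The "noise doesn't wash out" point is easy to demonstrate: with few samples and many pure-noise features, some noise feature will look correlated with the target purely by chance, which is exactly what fools correlation-based variable selection. A stdlib-only sketch (the sample and feature counts are arbitrary):

```python
import random, math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
n_samples, n_noise = 30, 50
target = [random.gauss(0, 1) for _ in range(n_samples)]
# 50 features of pure noise, generated independently of the target
noise = [[random.gauss(0, 1) for _ in range(n_samples)] for _ in range(n_noise)]

spurious = [abs(pearson(f, target)) for f in noise]
worst = max(spurious)  # with only 30 samples, some pure-noise feature
                       # typically "correlates" at 0.3-0.5 by chance
print(round(worst, 2))
```

A selector that just ranks features by correlation would happily pick that noise feature; only domain knowledge tells you it cannot possibly be causal.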
You will still need data engineers to build the whole data ingestion and processing pipeline (and although that can be easy when standardised tools like Spark are available, it's still a challenge in many cases).
Right, but I'd consider that falling closer to the realm of general software engineering -- similar to tasks of collecting analytics of users or building infrastructure to get data from point A to point B.
Maybe that currently is part of the job of an ML engineer. But if that's the only part, I don't think the role should be called "ML engineer" anymore.
I am working on solving this problem at the moment - I'm building a product that lets anyone build the ETL pipelines that produce inputs for an ML model. If anyone's interested in beta access (coming in a month or two), let me know: davedx@gmail.com