I was starting to get worried about the data collection already happening individually at these big companies. Now that they have announced a partnership which potentially combines all this data together, I feel so much better!
The funny thing is, the companies are ALWAYS going to put a positive spin on this. Not very different from the WhatsApp "we won't show ads, ever" messaging. Now I am in the camp which says "Fool me once, shame on you. Fool me twice, shame on me". Almost none of these companies can be trusted at this point. [1] Their refusal to invite OpenAI to the table really does not reflect well on them [2]. And the less said about the tenured professors who are now becoming company mouthpieces, saying things like "we create products which cannot make a profit but which are meant purely for data collection", the better [3]. And lastly, if these companies had such a sincere desire to "improve AI for the sake of humanity", how about they start by letting OpenAI (or a similar organization) do a data audit of all the information they share, so that we can actually be certain this is not just a data brokerage masquerading as a public service?
I wanted to say that I wish the AI community would boycott this effort completely. I find it a bit worrying that this community now resides almost entirely within the walls of corporate America.
[1] Interestingly, the only company which is even making noises about user privacy is Apple. Is it possible they saw something in this partnership that they didn't like?
Modern machine learning depends heavily on data. If you have 100x the data, your inferior algorithm is going to outperform a superior algorithm in most cases, because the data is the source of the intelligence.
It's a big, scary barrier for startups, for innovation, and we're running the risk of creating data superpowers that can't be competed with. Some algorithms don't perform well until they have a threshold of data. When that threshold is attainable only by a company with XXX million users, you inherently get a system where only those companies can innovate on the most cutting edge in machine learning.
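That threshold effect is easy to demonstrate even in a toy setting. The sketch below is pure Python on entirely synthetic data (it has nothing to do with any real company's systems): a k-nearest-neighbour majority vote trained on noisy labels is barely better than a coin flip with a handful of examples, but nearly recovers the true rule once the training set is dense.

```python
import bisect
import random

def knn_accuracy(n_train, k=25, n_test=500, noise=0.2, seed=0):
    """Accuracy of a k-nearest-neighbour majority vote on a toy 1-D task.
    The true label is 1 iff x > 0.5, but every training label is flipped
    with probability `noise`; test labels are noiseless."""
    rng = random.Random(seed)
    train = []
    for _ in range(n_train):
        x = rng.random()
        flipped = rng.random() < noise          # label noise in the data
        train.append((x, (x > 0.5) != flipped))
    train.sort()
    xs = [x for x, _ in train]
    k = min(k, n_train)
    correct = 0
    for _ in range(n_test):
        x = rng.random()
        i = bisect.bisect_left(xs, x)
        # The k nearest points must lie within k positions of x either way.
        lo, hi = max(0, i - k), min(n_train, i + k)
        window = sorted(train[lo:hi], key=lambda p: abs(p[0] - x))[:k]
        votes = sum(lbl for _, lbl in window)
        correct += (votes > k / 2) == (x > 0.5)
    return correct / n_test

for n in (10, 100, 10000):
    print(n, knn_accuracy(n))
```

The same mediocre algorithm goes from near-useless to near-perfect purely by adding data, which is the commenter's point about who gets to compete.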
That's a bad thing for us, especially if we find those companies turning against us. Did Google start putting a liberal spin on search results to try to influence the election? We don't know! But we do know that they are using their position to manipulate ISIS recruitment, which means they aren't afraid to get their hands dirty with this kind of stuff. THERE IS NO OVERSIGHT! And you aren't going to get a search engine that can come close to competing with Google unless you can find a way to match their data volume before you have something competitive. It's a massive barrier.
Facebook is enjoying the same sort of entrenchment. I've switched away from Facebook, and now I've lost touch with a lot of friends. They are my friends, why can't I export them? Why can't I export the data that belongs to me, and use it in some open-source format that allows me to reap the benefits of all the work that I spent tracking them down inside of Facebook's private garden?
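For what it's worth, the export being asked for here would be technically trivial. As a hypothetical sketch (the JSON field names are made up, not any real export schema), converting a friends list into vCard, an open format virtually any address book can import, takes only a few lines:

```python
import json

def friends_to_vcards(export_json):
    """Convert a hypothetical JSON friends export into vCard 3.0 text,
    an open format that virtually any address book can import."""
    cards = []
    for friend in json.loads(export_json):
        cards.append("\r\n".join([
            "BEGIN:VCARD",
            "VERSION:3.0",
            "FN:" + friend["name"],
            "EMAIL:" + friend.get("email", ""),  # optional field
            "END:VCARD",
        ]))
    return "\r\n".join(cards)

# A made-up two-entry export; a real one would carry more fields.
sample = json.dumps([
    {"name": "Ada Lovelace", "email": "ada@example.org"},
    {"name": "Alan Turing"},
])
print(friends_to_vcards(sample))
```

The barrier to exporting social-graph data is a business decision, not an engineering one.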
As a society, we need to start addressing these things, or we are going to end up with Internet giants that we don't have the means to get rid of, and that aren't at risk of being out-competed.
> But we do know that they are using their position to manipulate ISIS recruitment, which means they aren't afraid to get their hands dirty with this kind of stuff
To be fair, Google is not cooking the search results; a company under their wing (Jigsaw, formerly Google Ideas) is buying ads on search terms related to terrorism. How good a deal they get on those ads is one thing to consider, and I heard there was a grant of AdWords money to that company for this, so there is that aspect as well. Buying ads is a whole lot different in my mind than changing the search result algorithm's bias. The latter of the two would be a game-changing thing.
Might be controversial, but I pretty strongly believe that buying ads is in the same category of manipulation as altering search results, with perhaps the caveat that ads which declare themselves as ads are maybe not nearly as bad.
Ads manipulate the flow of information going to a person for the benefit of the advertiser. It's taking advantage of a vulnerable person in hopes that they will change their behavior - whether that change in behavior is beneficial is irrelevant.
Should there be? I'm not quite sure. I mean, personally I feel a lot safer managing my own data privacy with what I give these companies, rather than having a government do it for me. And I'm not exactly a libertarian, I just think it's rational that this should be decentralized, the risk of a company turning against "us" is a lot less than a government, given history.
"I've switched away from Facebook, and now I've lost touch with a lot of friends. They are my friends, why can't I export them? Why can't I export the data that belongs to me, and use it in some open-source format that allows me to reap the benefits of all the work that I spent tracking them down inside of Facebook's private garden?"
All those "What does your friend list say about your love life?" type surveys use it.
"As a society, we need to start addressing these things, or we are going to end up with Internet giants that we don't have the means to get rid of, and that aren't at risk of being out-competed."
We've heard this a lot over the years, but I've really never seen a case of a company that can't be out-competed. Whose job would it be to "get rid" of an organization, and on what basis? People stop using their services, and they stop making money; that's generally how it works. Otherwise, antitrust law is the main mechanism today. Isn't that sufficient?
People forget that Google didn't ask permission or get oversight or contact copyright holders when building a search engine. They hooked a couple of servers up to the web and started crawling it with their algorithm, and replaced other search engines pretty quickly because they had a better service. One could say that it's impossible to compete with Google, but I'm not so sure: Bing is doing fine (if small share), DuckDuckGo is growing, etc. Adding more oversight/regs to the Web just makes it harder to replace these companies.
I'm sure plenty wanted to "get rid" of Microsoft in the 90s because of their attitude and growing monopoly, and certainly they were fettered by the U.S. DOJ, but looking back, the calls to break them up were a bit silly. No one would have predicted Apple was going to become much, much bigger (some still can't believe it).
I just think it's rational that this should be decentralized, the risk of a company turning against "us" is a lot less than a government, given history.
Last time I checked, Google search had 98% market share in Germany, Android had 78% market share worldwide. I don't see how you can talk about decentralisation given dominance like that.
Whether or not a de-facto monopoly is a bad thing can be argued, I guess. (I personally think it is)
But if one is present, I don't see why handing the power to a government that is (in theory) regularly elected and bound to basic rules of conduct and transparency is worse than handing the same power to a private entity which is bound to nothing except the expectation to grow and benefit their shareholders.
"Last time I checked, Google search had 98% market share in Germany, Android had 78% market share worldwide. I don't see how you can talk about decentralisation given dominance like that."
They're not legislated as the Official Search Engine of the EU. Microsoft too once had 98% market share. As did IBM.
Also, think of China, where Google has no market share. Baidu will grow and bide its time.
"But if one is present, I don't see why handing the power to a government that is (in theory) regularly elected and bound to basic rules of conduct and transparency is worse than handing the same power to a private entity which is bound to nothing except the expectation to grow and benefit their shareholders."
If you want to entrench the western world to assume Google is a permanent fixture, you write laws and regulations that make that assumption and turn it into a publicly transparent utility. This is how Theodore Vail turned AT&T into a monopoly for nearly 80 years in the USA. It wasn't necessarily "bad", at least for the first 50 years, it consolidated and standardized service, and provided transparency. But AT&T eventually turned against us by blocking future competition, and it took 15 years of fights to break it up. Plus its "transparency" also developed the presumption of letting the government have access to its data carte blanche to spy on its citizens. Personally, I'd rather not have an organization possibly run by Donald Trump know too much about me.
We could also break them up, or put anti-trust fetters on them, but that requires serious scrutiny before doing, as it might lead to massively unintended consequences.
"Until facebook decides to shut down this api - which is at their sole discretion."
Unlikely, given that API is the only reason anyone pays them money.
They're not legislated as the Official Search Engine of the EU. Microsoft too once had 98% market share. As did IBM.
And then what exactly happened? Did competitors beat them on their own market because of superior performance or better regards for the interests of users? Nope. They discovered new markets and solved new problems which made the old markets less relevant. It's not clear to me that this is a process that you can trust will work in the future.
*Personally, I'd rather not have an organization possibly run by Donald Trump know too much about me.*
You mean like Trump Ventures which owns stock of Google, Apple, AT&T, and Tesla (among many others)?
"Until facebook decides to shut down this api - which is at their sole discretion."
Unlikely, given that API is the only reason anyone pays them money.
That argument makes no sense. I as a user don't benefit at all if some analytics company has a contract with Facebook that allows them to access my data.
If users should realistically be able to migrate to another service, they need free and automatic access to their data - which Facebook is in no way forced to provide.
"That argument makes no sense. I as a user don't benefit at all if some analytics company has a contract with Facebook that allows them to access my data. If users should realistically be able to migrate to another service, they need free and automatic access to their data - which Facebook is in no way forced to provide."
That's not how it works.
You have rights and easy access to your data - always have. 3rd parties don't get my PII unless I let them. Heck, I can download a ZIP file of everything I've ever uploaded with a single click! It makes zero sense they would prevent you from getting at it, if they did, they'd be breaking their own value prop of a place to share things, which by definition means people need to be able to get at it easily.
Again, this stinks of non-customers trying to interfere with Facebook customers for dubious reasons out of fear.
You have rights and easy access to your data - always have.
And who would enforce such a right if we have no governmental oversight?
If they did, they'd be breaking their own value prop of a place to share things, which by definition means people need to be able to get at it easily.
Facebook's value proposition is people sharing things within the platform. You can perfectly well provide that service and at the same time make it hard for users to migrate their data to a different platform. They already did it in the past. [1]
Again, this stinks of non-customers trying to interfere with Facebook customers for dubious reasons out of fear.
Indeed. In this case, the non-customers are Facebook's users, and I believe the fear is entirely justified.
Fundamentally we are being cornered, marked, and forced to buy their shit whenever we need something. We can't do much as individuals, but we can expect different syndicates wanting slices of the same pie: China and their soft-power government, Russian government trolls for destabilisation, etc. This makes me think the US government is well aware of and happy with this development, from their geopolitical perspective. It is not liberals vs socialists, it is my shit vs yours to feed 8 billion consumers.
You would want to support your local communities and markets, maybe, but they now risk becoming invisible or outpowered? It is called biodiversity. A fully Googled, or fully Amazoned, or otherwise monopolised world is very sad and dangerous for evolution. Also dangerous: fascism and communism are made of the same metal and come from a monopoly of minds and customs.
"You would want to support your local communities and markets, maybe, but they now risk becoming invisible or outpowered?"
With Amazon, sure, there's a point, but that battle was lost 30 years ago with Wal-Mart, when U.S. consumers decided they didn't give a shit about their local community; they wanted "low prices". Not everywhere, mind you.
"A fully Googled, or fully Amazoned, or otherwise monopolised world is very sad and dangerous for evolution."
I'm not sure I agree. Where's the monopoly? High market share isn't the only criterion. It has to be a case where consumers have little to no choice, which is clearly not true today. Plenty of people avoid Google (I try as much as I can). Amazon also isn't the world's only retailer or cloud computing company by a long shot.
These things might be true tomorrow, but future crime à la Minority Report isn't actually a thing. I'm also not sure it makes sense to give corporate welfare to losers that will waste the capital when these companies are actually giving customers good service (hence their high market share) and are great at technology (the barrier to entry is about knowledge).
Also, given the number of negative articles about Amazon by current and former employees lately, I wouldn't bet on their braintrust staying in one place.
"Also dangerous: fascism and communism are made of the same metal and come from a monopoly of minds and customs."
I think that equating global corporations with fascism and communism dilutes those terms too much. We are a global society of organizations, it is pluralistic, and it is relatively new. I'm not fond of unregulated global capitalism, but at the same time I think it's much riskier to be ham-handed with laws and regs because someone thinks they know better, unless there's a lot of peer-verified research on the issue.
In principle we are going towards some sort of consolidation, aka worldwide biodiversity annihilation, so that there will be 5-7 corporations / governments deciding for us all. It would be good if, say, they had a higher global mission (for example: space colonisation). It is not good if they only act for their own shareholders or bureaucrats (who are they? what do they want from us, if not enslavement and control?). You don't need fascism or communism in their given forms; it is the principle from which they arise that is emerging from these global consolidations again.
"In principle we are going towards some sort of consolidation, aka worldwide biodiversity annihilation, so that there will be 5-7 corporations / governments deciding for us all."
I see very limited evidence of this.
"It is not good if they only act for their own shareholders or bureaucrats (who are they? what do they want from us, if not enslavement and control?)"
Most shareholders are you and your neighbours, i.e. mutual and pension funds. As an example, nearly all Canadians are shareholders in various global companies via the CPP, one of the world's largest investors. My mother was an Ontario school teacher and is thus a shareholder in many companies, administered by one of the world's largest institutional investors (OTPP).
Most people who own equity don't want enslavement or control, they just want a return on the money they put in over their lives. Some senior management OTOH do want control/power, and enrich themselves in the process, and top management pay/power remains an open problem.
"You don't need fascism or communism in their given forms; it is the principle from which they arise that is emerging from these global consolidations again."
Again, I don't see this, sorry. The world is not consolidating, it is breaking apart, from my view. And monopolies are always temporary at best (a few decades).
I don't think you need oversight if you have a fair playing field for competitors. If switching from Facebook to... that other social network (?.. that's exactly the problem) was as easy as switching which restaurant you eat at, I wouldn't be calling for regulation I would be content using the social network that I thought was good for me. But I can't, because nothing comes close to Facebook.
I feel that if we had broken up Microsoft in the 90's, we'd be in a much better place today. We're still churning out professionals by the millions who are fully dependent on proprietary software to make a living, when nearly equivalent free alternatives do exist for many of them (Office, CAD, Adobe are the biggest). It makes me sad, and Microsoft had a huge hand in that.
For an example of how oversight can do nothing for the end user, take a look at the Brazilian model of state regulatory "agencies" and how they get influenced by the bigger companies.
> I mean, personally I feel a lot safer managing my own data privacy with what I give these companies, rather than having a government do it for me
What a fallacious statement. How is government oversight of these companies going to prevent you from applying your own magical 'data privacy management' spells?
"How is government oversight of these companies going to prevent you from applying your own magical 'data privacy management' spells?"
That depends on what is being overseen by the government.
I have no issue with EU-style or Canadian-style privacy regs for PII. I think that encourages local innovation by ensuring data can stay within a legal geographical territory.
Where I take issue is "AI is scary, please stop these companies from doing things" and, by implication, thinking that a government (a) is an organization I trust more than the company with what should be done with my data (a massive leap), or (b) provides value by preventing/limiting that organization from offering a service I want based on AI because non-customers somehow think it's scary or might lead to some nebulous monopoly power.
I'm not saying no government oversight is ever warranted, I'm saying that there's a lot of fear-based desire in this thread for "government save us from Zuck!!" which isn't helpful when people don't even know what specific regulation they're looking for and what the systemic consequences of that would be.
> Where I take issue is "AI is scary, please stop these companies from doing things"
That's my issue with people like Elon Musk & Sam Altman starting OpenAI, where "Many of the employees and board members are motivated by concerns about existential risk from artificial general intelligence"[1], and also people like Stephen Hawking raising concerns about AI in the "Skynet is coming, panic!" style (he also warned not to respond to aliens a few days ago). It just seems like a lot of the rhetoric is about a state of AI we're most likely 500+ years away from, while the panic may actually slow down the useful research that would lead us there faster...
If you lost touch with people because you don't visit certain pieces of HTML code anymore, then they're not your friends. Facebook has skewed the notion of what a "friend" actually is, that's the main part of keeping people behind their fence.
As someone who's left Facebook, I respectfully disagree. Without a backup plan in place to keep in touch, quitting FB was akin to losing a large chunk of my address book. Facebook was simply my only means of communication for a significant portion of my acquaintances. It's not about some philosophical debate about what a "friend" is.
I just went through and unfollowed literally every person and group on the site. Then I deleted all my photos (which were very few to begin with) except a photo of an object that looks like a face but isn't me, and a photo of the NYC skyline for the cover photo (after downloading them all). I untagged myself in everything and deleted every status and post on my wall. Oh yeah, and every piece of information besides my name. Then I locked down all privacy settings.
Now I have AOL instant messenger again, except I barely use it.
To be honest, the site is so much better that way. None of the bullshit memes (which can be rather infuriating), and no one can pretend to be me by taking over my account.
Soon I'll do it with Twitter which is the only other social media I'm on.
Actually the source of intelligence is not the data but the source of data.
I've seen Norvig's video myself and while I see that they get better results with brute force on more data, that does not mean that better algorithms don't exist.
In fact, I know for sure there are better algorithms (only because we have for example humans that are better than AI on certain tasks) that probably can use the available data in a more intelligent way than brute force, yielding better results in the process.
When John Green from Portland, Maine goes to Google and types in a search query for 'dirt on Mayor', I would imagine that Google is going to have an advantage over any algorithm, human or otherwise, that does not have access to things like the fact that it's John Green from Portland, does not have access to all the news coming out of Portland (at least not in the same, automated way that Google does), does not have access to Mr. Green's search history, and does not have data on the results millions of other searches related to 'dirt on mayor'.
If you're going to compete with Google as the one-stop shop for searches, you're going to need to be able to provide search results that are consistently competitive. And I think Google has gotten good enough that you simply can't do that without access to some portion of their mountains of data. Nobody else has search history for every single American, and without that you are going to be crippled when {Random American} tries to search for something.
I've switched to Duck Duck Go for more than a year and do >=95% of my searches with them.
I've observed that there is a set of results that can be found in all engines, but each engine also has a smaller set of results that are not found on the others. For this variety of results I occasionally use both Google and Bing.
This variation algorithm that I follow yields better results than sticking with the brute force of either one.
It is these "outside the box" algorithms that out-compete brute-force algorithms on the same ground. That means that, with guided context, your search results improve drastically.
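The merging strategy described above is easy to sketch. This is a hypothetical illustration with stubbed-out engines (real ones would be queried over their APIs): take the top results from each, and rank the URLs that several engines agree on first.

```python
def merged_results(query, engines, per_engine=10):
    """Union the top results from several search engines.
    URLs that more engines agree on rank first; first-seen order
    breaks ties."""
    agree = {}   # url -> list of engines that returned it
    order = []   # urls in first-seen order
    for name, search in engines.items():
        for url in search(query)[:per_engine]:
            if url not in agree:
                agree[url] = []
                order.append(url)
            agree[url].append(name)
    return sorted(order, key=lambda u: (-len(agree[u]), order.index(u)))

# Stub engines standing in for real APIs, purely for illustration.
engines = {
    "engine_a": lambda q: ["x", "y", "z"],
    "engine_b": lambda q: ["y", "w"],
}
print(merged_results("test query", engines))
```

Here "y" ranks first because both stub engines returned it; the rest keep their discovery order.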
Indeed, and let's not forget that we're not (and should not be) competing with computers. What we want are the results we can get together - CAD, CAE, with the A standing for Assisted/Aided, etc.
You just made me think about a few issues:
1. wouldn't these five be a cute quasi monopoly? (is pentapoly a word?)
2. isn't it brilliant how "they" - including many others, like Musk - make a big fuss about AI that is mostly already open and published, while what we should worry about is data?
3. is there any hope for an open society when most of us nerds - who are supposed to be all rebellious, undergroundy, security aware etc - with a lot of power to build the future can't wait to run into the arms of these big guys? We could use more @dhh type opinions.
>Now that they have announced a partnership which potentially combines all this data together, I feel so much better!
That's not going to happen. Their data is worth too much and it would lose its value if they shared it with competitors.
>Interestingly, the only company which is even making noises about user privacy is Apple. Is it possible they saw something in this partnership that they didn't like?
Apple aren't good at developing software. Having them in on this wouldn't benefit anyone but themselves so I doubt they were even asked. They're a company that makes shiny looking things with shiny looking interfaces.
This has nothing to do with data collection or any of the other crazy conspiracy theories you are trying to peddle, such as OpenAI being intentionally excluded, etc.
Hell, half of your comment does not even make any sense.
Allowing OpenAI to do a "data audit"? LOL, are you seriously high on drugs? Do you know what OpenAI is or does? Nothing to do with auditing!
And yet this is the most upvoted comment in this thread, along with the second one, where people have another set of crazy theories about Apple being "magically" superior.
Can somebody actually comment on / back up the claims about how open or not this effort is?
I could see legitimate benefits from this kind of collaboration (it's ultimately about the ability to process data, not the data itself, after all). But I do also wonder if this reduces to the banal abuse of personal data that is suggested here.
So you're basically saying that this is a "Cartel on AI".
What I'd like to see is all of the consortium members sign a pledge that none of them will sell or use their AI technology for defense/military purposes (mainly looking at you Microsoft/IBM/Amazon).
I could see how 10 years down the line, the US government tries to co-opt the group to help it in its wars. This needs to be preempted now, in the same way they should've preempted calls for encryption backdoors by adopting end-to-end encryption years before and making it the status quo, before the government even gets a chance to think about it. They mostly failed in doing that, but I hope they won't fail with this.
They chose their initial partners somehow and did not include OpenAI. Maybe they thought this would make everything too transparent, maybe they see little financial benefit (these are private companies, generally aiming to benefit shareholders), whatever. I see no problem with this. I hope OpenAI joins eventually, but even now some of the partners go beyond what I would expect from a private company in fostering open AI research.
I am, however, worried that this would make it even easier for a government agency to get bigger and bigger datasets of personal information, because: (1) a government can ignore some protections that corporations are supposed to follow, and (2) a government can act on this information in a way corporations cannot (use it to argue for or issue search/seizure warrants, etc.).
>>"As of today’s launch, companies like Apple, Twitter, Intel and Baidu are missing from the group. Though Apple is said to be enthusiastic about the project, their absence is still notable because the company has fallen behind in artificial intelligence when compared to its rivals — many of which are part of this new group."
It seems Apple's lack of engagement in the community [1] is really starting to hurt it. Did anyone else take away from this that the other big players are not inviting them to the table / not considering them real competition?
I see a list of companies which are ethically compromised. Apple has publicly taken the opposite side of the fence on this one. I'm not sure they want to be associated with Facebook, Google, Amazon or Microsoft using AI to strip apart users' private behavior and reassemble it as advertising data. Would we trust a cooperative of tobacco companies funding scientific research and ethics surrounding the use and promotion of cigarettes?
Does anyone contest that Facebook's AI is going to pull apart its users' public social feeds and private FB conversations to advance its own business model?
A unification of a large group of tech companies could even be seen as an anti-OpenAI. A singleton could certainly emerge as a co-operation between multiple large corporations and/or a government. An Amazon & Google partnership is particularly terrifying considering how much computing power could be rolled over to a unified AI almost instantaneously.
We do not need an organization engaged in PR for for-profit companies, attempting to placate the public and lobby regulators while unleashing their AI on the public at large. That is what this looks like. We've seen their business ethics already. It isn't a good template moving forward.
I would be less cynical if I heard anyone else besides Apple taking a public stand on keeping personal data private and AI contained in boxes. There does need to be AI ethics and safety leaders, but Google, Facebook, Microsoft, and Amazon should be following not in the lead.
Whether right or wrong, I think it's good to keep in mind that Apple has, by far the highest quarterly profit and total asset value, compared to any of these companies - it really is a giant among giants. Alphabet certainly has an excellent future outlook (reflected in the markets), but its current financial state still pales in comparison to Apple.
Taking this into account, I figure Apple thinks twice and thrice before joining industry consortiums and partnerships that they didn't initiate themselves. This would be straightforward posturing/strategizing on their part, and probably doesn't reveal too much about the current state of their R&D in Machine Learning. It's unfortunate that things are this way, but nothing unusual from a "game theory" perspective.
I also don't think of them as a straight technology company, but more of a best of breed product company focused on a great user experience. In that sense, high risk, long term investments that don't have an immediate application don't fit their profile.
When Apple reported Q3 earnings this year, Tim Cook said that the services revenue would be "the size of a Fortune 100 company by next year."[0] Revenue was ~6 billion in Q3, up 19% from a year prior. Annualized, that comes out to roughly 24 billion a year.
When I saw everyone playing Pokemon Go and buying coins and what have you, I thought of Apple taking a 30% cut of every transaction...
The service revenue really is the strongest case for being a bull. Of course this is just my own personal opinion. I own the stock and I'm long on the prospects. I also love the products. Also so much cash!
Apple has its own OS, compilers, IDE, 2+ languages, various frameworks, etc. This is more than what Google has and, more importantly, it is used by a lot of people. But yeah, it's just a hardware company.
Technology is broader than just software (though I'm not implying you said it is just software). Apple is a phenomenal technology company - their hardware + software integration is unmatched and they have made advances in everything from software to hardware to supply chain optimization to manufacturing technology.
Also, they are (or at least were) a prime example of a company that can bet on high-risk, longish-term investments that don't have an immediate application: e.g. their investments in chips, touch technology, etc. It may not be as long-term as some of the Google X stuff, but I don't think it is fair to call them averse to making high-risk, long-term investments.
I think you can see the point I'm making: yes, Apple has a big pile of cash, a fat amount of profit, and a high stock value. However, Alphabet's influence and reach are far, far beyond Apple's. It's just not as flashy and visible.
Don't agree? Let's imagine(!) two scenarios.
Scenario 1 - the US Government somehow figures out a way to force Apple to pay all its evaded taxes. Apple doesn't agree, and to make a point how important they are, killswitch everything. Results:
10% of the developed world can't use their phone anymore
10% of the developed world can't use their computer anymore
An even smaller subset can't access their e-mail
Scenario 2 - the US Government somehow figures out a way to force Google to pay all its evaded taxes. Google doesn't agree, and to make a point how important they are, killswitch everything. Results:
90% of the developed world can't use their phone anymore
90%+ (Android owners + Gmail, Gcal etc. users) of the developed world can't access their e-mail, calendars, contacts etc. anymore
??% (decently high) of companies/universities can't access their e-mail, calendars etc., because Google Apps is down
??% has trouble navigating because Google Maps goes down (Other maps are still inaccurate and incomplete outside of the US)
??% of companies experience issues by Google Maps+API being down (Uber springs to mind)
Advertising also stops working and paying out (can be switched easily though, so not as critical)
Besides that, Alphabet is also involved in a number of moonshot projects: (Level 4) self-driving cars, immortality research, AI research, LTE balloons, etc.
The funny part is that I'm typing this on a 13" Retina MBP, with an iPhone 5S in my pocket. I'm by no means a Google fanboy. I love Apple's vertical integration, the amount of synergy and seamlessness their devices have, and that they're championing individual privacy. Also doesn't hurt that compared to Linux, OS X is a godsend to dev on.
I've done enough inebriated rambling so I'm gonna shut up now..
Still gave KAT owner's IP address and all his .icloud e-mails to the FBI without any resistance. WTF?
Both Google and Apple are extremely important, valuable companies that have had a very significant impact on the world and culture.
Google created Google - they more or less are the internet (that and Facebook) for the world. To 'Google' something is a phrase that entered the vocabulary a very long time ago. Their enormous presence in Internet/search, plus more cultural things like Youtube, makes it impossible to imagine a world without them.
Apple created iPhone. Holy shit, the iPhone. More or less, they invented the smartphone and the ecosystem around it. I really do wonder whether things like Uber would exist if Apple never made iPhone. I doubt Twitter would be as popular. Entire industries have been created now because Apple made iPhone[1].
They're both world-changing companies that have had an enormous impact. This pissing competition of who's bigger and better is so unimportant and just misses the point entirely.
[1] This is actually an interesting thought exercise, for me at least. What would have actually happened if Apple didn't make the iPhone? It's been shown that before release Android pivoted from a Blackberry device (btw, Blackberry just announced they're going to stop making hardware) to a more 'iPhone-like' device once iPhone was announced.
The most favourable interpretation is that it would have taken Android and Microsoft longer to reach the stage where they could create the kind of influence that things like the App Store have had.
"What would have actually happened if Apple didn't make the iPhone?"
I bet nothing much different would have happened; another company would have made one within a 1-2 year timeframe. The idea of a "hand phone-computer" was very old (decades in sci-fi, and real products in the nineties called "PDAs"). It just happened that around the time Apple made the first iPhone, tech was advanced and cheap enough for the mass market. To sum up, it was perfect timing, not a revolutionary idea.
You're forgetting the massive impact the iPhone had on the mobile web.
Before the iPhone was released, the mobile web was a mixture of rudimentary WAP/WML, or if you were lucky awful browsers that tried to render HTML. Everybody hated it.
Then the iPhone was released and it had a web browser in the same ballpark as real desktop browsers. What's more, WebKit was open-source. In next to no time, all the major phone vendors had a WebKit-based browser of their own and the mobile web took off.
Yes, if Apple didn't release the iPhone, eventually we would have gotten decent mobile web browsers. But phone vendors were doing an awful job of it up until Apple moved the whole industry forward. The only other possibility would have been Opera, but that would have required unrealistic licensing deals to have the same impact as the iPhone, and Opera ended up switching to WebKit too.
You're also forgetting about the effect the App Store had on the mobile marketplace. Before the iPhone, buying a mobile application tended to be something that nobody did. Every app had to roll their own payment infrastructure – it was painful for developers to charge and it was painful for customers to pay. Then the App Store came along and made it easy. There was little movement on centralising this beforehand from any other vendor, but as soon as Apple did it, everybody else hopped on the bandwagon.
I'm not arguing that the smartphone wasn't a huge step forward, just saying that it was only a matter of "when" and "who", not "if". Also, as for the mobile web boom, I think the introduction of 3G was the key factor; there was only so much you could do with GPRS/WAP.
Yeah, that was my thought exactly. However, iPhone software was so radically different from everything else at the time that I still wonder "what" it would have been and what the App Store revolution would have looked like.
Of course, in this alternate universe it's just as likely that other amazing advancements happened because Apple didn't do iPhone.
Sure, it would have happened eventually, but not in the 1–2 year timeframe you mentioned. Not even close, judging by the lack of progress from other vendors.
3G wasn't the deciding factor, the first iPhone didn't have it but responsive sites started popping up right away. There's also no point in 3G for web browsing unless you have a decent rendering engine – WML and XHTML Basic sites were hardly data hogs.
You probably have a very short memory: when the iPhone 1 came out, everybody was shocked by how much better it was than anything else, and how different and better the whole experience was. Touch screen, gyroscope, sleek design, compared to the pen-based, ridiculously slow, boxy Windows CE machines.
I am far from an Apple fanboy and have never actually owned any of their products, but boy, they really brought the mobile revolution to this world. It took the competition many years to catch up, with great help from Google and their Android. That's the main reason their share price skyrocketed during that time.
Blackberries predate the iPhone. But you couldn't get one cheap with a phone contract. Also, the iPhone leveraged everyone's iPod song list. Not to mention its cool factor.
>* I think you can see the point I'm making: yes, Apple has a big pile of cash, a fat amount of profit, and a high stock value. However, Alphabet's influence and reach is far, far beyond that of Apple. It's just not as flashy & visible.*
Yeah, Alphabet is bigger in every way that counts except those two inconsequential things for companies: revenue and profit.
The exact same thing holds for Google Search. They have not generated any product that brings as much cash as search ads (not even close) -- and that's since 1998 or so.
Google Docs, Google+, Google Glass (lol), etc. never went anywhere much, revenue-wise, and even Android is mostly a loss as far as Google is concerned, even including licenses and mobile ads (it makes a good buck for Samsung, though).
> The market is nearing saturation, and competitive pressures will erode margins.
They serve the top end of the market (neither feature phones nor "just give me some smartphone free with the contract"), where there is not much competitive pressure, and they've been milking more profit than all Android shops combined at that. Besides, we have kept hearing that since 2008, when Android appeared, and it hasn't even happened in the desktop/laptop space (they sell 5-10% of PCs and get 45% of the PC profits).
Google has not made anything else that generates that kind of profit, but they can still pull some levers to make their search even more difficult (removing verbatim mode, for example) so you'd spend more time on it (and thus generate more ad money).
Also they've got those cute/silly moonshots that just might pan out.
> However, Alphabet its influence and reach is far, far beyond that of Apple. Its just not as flashy & visible.
You just gave some very "flashy & visible" examples of Alphabet's influence while citing only a "big pile of cash" as Apple's flashy & visible trait.
Apple has always been the most secretive, and they almost NEVER announce something before they're ready to advertise it.
Did anybody see the new Mac Pro redesign coming? What about the fact that they were developing a whole new language, Swift, for 4 years before anybody suspected it? Now it's one of their biggest things ever (one of the fastest-growing languages on GitHub by some metrics).
Hell, even the PowerPC to Intel jump could be considered a surprise, even though they had been working on it for 5 years. [1]
Now, just recently they revealed a bunch of stuff powered by offline machine learning in iOS 10 and the macOS Photos — all without sending your info to Apple — and frankly I'm more impressed by this and the differential privacy thing than with Google's efforts towards centralized AI.
If one were to make any guesses, I'd say Apple is in a unique position to build an AI network distributed across all their devices, without leaking privacy to Apple or anyone. This is very attractive to me as a user, even if I have "nothing to hide." There are over a billion iPhones and iPads and Macs combined, with beefier processors on average compared to the majority of Androids. Android's fragmentation may also make it harder for Google to pull off the same, even if they were to make a commitment to user privacy.
I think the winds of the world are changing direction; people are becoming increasingly aware of privacy concerns, and these issues keep making the headlines. EVEN if Apple is just using it as PR ammo, Google and Facebook are objectively the worst offenders in this area, and they don't look like such a nice and shiny People's Champion in a privacy-focused future. It may take just one good open-source, distributed search engine, or a newer and more hip social platform (Snapchat looks like a good contender there), to unravel all their prospects.
I agree with almost everything you said, except this part.
> I think the winds of the world are changing direction; people are becoming increasingly aware about privacy concerns, and these issues keep making the headlines.
It seems like only a minority cares about privacy concerns to even understand them well and take some action. Even when privacy issues make the headlines, billions of people just shrug their shoulders (and perhaps a few thousand say a curse word or two) but still go back to the same old ways they've been using Facebook, Google, etc.
I feel sad that the winds aren't changing direction, or at least not fast enough to be a force to reckon with.
As they have been doing from the beginning. The masses only act contrary to top level input when desperately hungry. Even their righteous anger is scripted and handed down.
What has been the problem is that the critical non-elite core (say 10%) of society was ignorant of the consequences of these technologies and they are now getting a better sense.
We only have to educate the 10% and only need to provide secure software for these same 10%. Societies change when the 10% no longer accepts the rule of the 1%. This is the primary critique against geek efforts to get "grandma" using crypto. If grandma is part of the 10%, she'll be able to follow step by step instructions.
Yeah, I should've said "I'd like to think/hope the winds are changing", but still, things like all the new messaging apps that make end-to-end encryption a selling point, and services like Skype losing active users at a rapid rate, do point to an undeniable, if yet-minuscule, effect.
Even during casual conversations with some of my friends who wouldn't be expected to care about these issues, they occasionally let out a glib "Sorry NSA!" whenever they say something that's known to be picked up by their monitoring.
So there is change. There is a chilling effect. And a big player like Apple making a big deal out of it, does help.
If they shut down services, they slit their own throat and cut off their revenues as well.
Apple and Google are not the only players in this game; Microsoft and others are involved as well.
Rather than shut stuff off, Google or Apple if forced to pay taxes will just lawyer up and appeal it in some way.
Yeah, developing on OSX/MacOS is better than GNU/Linux, but if you find a good IDE for the programming language you use, GNU/Linux gets better. Plus, GNU/Linux runs on the PC clones out there, and has been ported cross-platform to ARM, PowerPC, SPARC, etc.
A reason why I use GNU/Linux is that it is free and respects my rights as a consumer, with no DRM, and if I don't like one version of GNU/Linux there are hundreds of others out there. GNU/Linux is not locked to just one brand of computer as MacOS/OSX is, nor does it have DRM and telemetry in it like Windows 10.
This is total speculation but I imagine that Google captures less of the value it creates than Apple does.
If Android makes up 90% of the market share and is half as valuable as an iPhone, and Google were capturing the same % of the value created, Google would have revenue numbers from Android larger than Apple's from the iPhone. But it's not even comparable.
> I think you can see the point I'm making: yes, Apple has a big pile of cash, a fat amount of profit, and a high stock value. However, Alphabet's influence and reach is far, far beyond that of Apple. It's just not as flashy & visible.
Yes, I agree essentially with the above line (though Apple & Alphabet have comparable market values, which is why I said Alphabet has an excellent outlook). I didn't intend to start a minor flame war, as I meant "giant among giants" in the financial sense only, which of course is hugely important.
In terms of influence/clout (measured in eyeballs, political heft, economic clout via product ecosystems, etc.) Alphabet likely does edge out even Apple (perhaps not "far, far beyond"). However & without going into too much detail, I think my point still stands: from a "game theory" angle, there would likely be no strategic reason for Apple to join a partnership in this case.
Let's assume all these assertions are correct i.e. the disappearance of every Google product would hurt more than the same scenario but sub Google -> Apple. That doesn't actually imply that Google > Apple (in terms of general influence) in the world in which they both do exist.
My interpretation of your post is that Google might be more _important_ than Apple. And my counterargument is that importance is not equivalent to influence.
Why are you surprised that Apple isn't involved in a field that has no material impact on their business? Outside of Siri and related projects, I don't see where AI/machine learning has any immediate benefits.
Twitter and Intel missing seems like something to take note of, but I didn't even think twice about Apple. I'm more surprised GE is missing than Apple.
I use Android, but Apple are doing some good personal assistant stuff like tagging friends in photos, -without- uploading them to the cloud. A true personal assistant that can use machine learning locally and with discretion sounds pretty great
I'm really starting to appreciate this kind of thing about Apple. They seem to be concerned about keeping data in the hands of their users as much as possible, when the rest of the industry defaults to a cloud-hosted solution that requires the user to completely give up control.
That's a nifty feature, but I am sceptical that Apple can offer the same kind of experience when compared to cloud-based solutions that leverage massive amounts of aggregated data and backend processing.
Apple has made a pretty big deal about privacy, but they do collect user data (see the interview a few weeks ago with Apple's mapping lead, saying they are collecting user data to improve maps).
Ultimately if users want ultra smart AI assistants (Siri, Google Now, Cortana, etc.) they will need to give up some of their data.
That may be very unpopular with the HN crowd, but I suspect the vast majority of consumers will be OK with that tradeoff.
Apple also works on technology that is meant to help users give up less of their privacy. They talked about this at WWDC 2016 under the headline 'Differential Privacy'. They're trying to use statistics to make it harder to attribute data points to individuals while still providing a workable data set on the collective of users.
I use Google translate offline on Android when in China and it is great. Seems uploading the trained NN to a phone when online and then using it privately/offline is a great way to go.
At WWDC they talked about how they put AI all throughout the system. It's not just when talking to Siri. It's things like suggested apps, photo tagging, deciding when to refresh apps in the background, text prediction, and probably lots of other stuff that I can't even think of because Apple's approach to AI is to make AI largely an implementation detail of the experience rather than a user-visible component.
>> Apple's approach to AI is to make AI largely an implementation detail of the experience rather than a user-visible component.
That sums it up nicely; I'm a bit embarrassed those features didn't cross my mind when thinking about Apple/AI. It reinforces your point - its use is invisible, much like their presence in this consortium.
The big area for AI is to replace a human being with an AI program and not have the customer notice it.
Then those webchat and help desk jobs will be gone and replaced with AI doing robocalls and acting like a chatbot.
I'm getting robocalls that ask for a Chris or something, and as soon as I say there is nobody here by that name (so they can take me off their list), the bot cuts in saying "Maybe you can help?" and then tries to sell me something. Most of the time the Caller ID is spoofed, and it calls back with a different Caller ID and number, etc. I'm on the Do Not Call list, but robocallers just war-dial numbers until they reach someone and then try to hook them in.
Not only that but AI can take the jobs of writers, programmers, managers, legal assistants, and in robot form burger flippers and checkout kiosks.
As we reach 2020-2030, AI will be trying to get as smart as a human being. As long as it has electricity, it can work 24/7 without getting tired or hungry like a human being would.
Because machine learning is one of the most important tricks they're going to be able to use to make their devices even more intuitively usable. There should be entire teams at Apple salivating at new ways to extend powerful functionality to users _without_ making the interface to do so gross and complicated. And they probably are -- they've acquired several ML startups lately. (Hi, Turi friends!)
I'm surprised at this "Apple not AI" narrative. I've had Google Maps & Waze on my iPhone for years and neither of them seamlessly turned on "where did I park my car" feature. I just turned my iPhone on one morning and it just told me. Now, mind you, this may not be a fancy deep learning model. It could be some simple linear model, maybe a tree model, hell, maybe it's hardcoded rules? But despite these things Apple delivered a great feature which is in the realm of what I would consider "AI enabled feature".
Mind you, I don't have an Android device, so maybe Google Maps does this automatically for you on Android. It doesn't do it on the iPhone, at least not automatically (aka I don't bother to look it up, which is kind of the point).
It uses a combination of Bluetooth (if you're near/connected to your car's stereo), motion sensor and GPS.
If you started moving at 70KM/h nowhere near a train or bus station, and then suddenly stopped moving at that speed and shortly after started moving at 5KM/h, you've clearly just got out of your car. Use bluetooth for extra verification if possible. When certain, get location (or load location) from GPS and mark spot. Tadaa, no fancy learning model needed.
Actually, not necessarily. You can just as well end up in a traffic jam. In fact, that's how they display traffic info on maps: many phone users doing what you just described.
Well, not surprised of course! Anyway, perhaps I'm just a simple man, ... besides searching the web, I think that's the best AI type feature for the consumer yet!
The new version of the Photos app does image recognition. It seems to work fairly well, and unlike Google Photos, happens locally on the device and doesn't require cloud uploads.
Likely this is part of why this group was created. They were probably getting tired of Apple trying to ride on everyone's hard work without giving anything back.
> As for AI, Apple is a small and outsider player which will hurt in the long term.
Since you didn't explicitly state whom it will hurt, the devil in me reads this sentence as Apple's involvement (or lack of it) will hurt all people. :)
There's no conspiracy here. Apple is just not involved in the machine learning community in the same way as these others. Look at NIPS and you'll see activity from all of the other participants in this announcement. Apple isn't present in the same way.
apple isn't hurt by this at all. they likely don't want to commit any resources beyond a spokesperson mentioning that they're enthusiastic about it.
this partnership thing is a pr move that makes all involved look good. investors and shareholders alike thinking "cool, they're working on ai, that's good"
what will come of the partnership? nothing other than a place journalists can ping for questions about ai.
AI is by nature abstract. Once it's not abstract, there is typically a more useful term for it: classification, recognition, search, etc etc. What do you mean by doing AI right? Apple's track record with doing something right applies to product lines, not their technology.
What's the purpose of this initiative? Sharing technology? Hardly. The goal is probably to shape the discourse on AI and its implications on society and the individual in a way that's favorable for these companies. In other words, they will try to preempt, counter, and suppress criticism of their business models, i.e. the AI exploitation of user data in the service of advertisers and others. It's pretty obvious why Apple is not on board. They have previously taken the position that user data should be left alone and therefore pose a threat to Google, Facebook et al. whose financial success is solely built on the extraction of information from users. This has nothing to do with Apple falling behind technologically.
Are LeCun, Corrado, etc. actually running this? They're pretty busy, and the website doesn't sound like them:
"We believe that by taking a multi-party stakeholder approach to identifying and addressing challenges and opportunities in an open and inclusive manner, we can have the greatest benefit and positive impact for the users of AI technologies. While the Partnership on AI was founded by five major IT companies, the organization will be overseen and directed by a diverse board that balances members from the founding companies with leaders in academia, policy, law, and representatives from the non-profit sector. By bringing together these different groups, we will also seek to bring open dialogue internationally, bringing parties from around the world to discuss these topics."
This sounds like it was written by some PR person. Google and Facebook are "IT companies"?
"Finally we announced the Partnership on AI to benefit people and society!
Amazon, Google/DeepMind, Facebook, IBM, and Microsoft will collaborate to advance understanding of AI and discuss best practices on challenging issues such as ethics and trust. The five companies are founding the initiative, but everybody is invited to join.
Representatives are Eric Horvitz (Microsoft), Yann LeCun (Facebook), Mustafa Suleyman (DeepMind), Ralf Herbrich (Amazon), and myself from IBM.
More info at www.partnershiponai.org
Looking forward to start working together on this exciting initiative!"
"A picture of the co-founders of the Partnership on AI, at the IBM Watson Headquarters in NYC (on Astor Place, across the street from my Facebook office!).
With Eric Horvitz, Francesca Rossi, me and Mustafa Suleyman.
Ralf Herbrich joined us by phone from Germany."
You could define an IT company to be a company whose primary business is computer-based storage, retrieval, and manipulation of information, and with such a definition what company could be more IT-like than Google or Facebook?
To quote Pedro Domingos in "The Master Algorithm" [1]:
> But everyone has only a sliver of it [information about you]. Google sees your searches, Amazon your online purchases, AT&T your phone calls, Apple your music downloads, Safeway your groceries, Capital One your credit-card transactions. Companies like Acxiom collate and sell information about you, but if you inspect it (which in Acxiom's case you can, at aboutthedata.com), it's not much, and some of it is wrong. No one has anything even approaching a complete picture of you. That's both good and bad. Good because if someone did, they'd have far too much power. Bad because as long as that's the case there can be no 360-degree model of you. What you really want is a digital you that you're the sole owner of and that others can access only on your terms.
Does this mean that effectively all of Facebook, Amazon, Google, IBM and Microsoft will have the whole picture? That makes me worried.
Who gives a sh!t that Apple was not at the meeting. I think the main takeaway is that 4-5 companies might control one of the most powerful technologies/ideas of the last 5 years. It's already hard enough competing with these companies; how is this good for everybody else?
edit: "The group plans to make discussions and minutes from meetings publicly available." I guess that is cool. But what about the competitive advantage these companies have right now? Does anybody monitor that stuff? I am not talking robots and singularity crap but rather companies that have means and the ability to make every decision with ML and the data to create an insurmountable competitive advantage.
It's not fear mongering. Some of the ideas I hear from people on this forum, from academia, and the likes are just downright terrifying, and we don't even know what future AI will be capable of.
We have to decide as a society, and decide really soon how far we are willing to take AI research.
We built the nuclear bomb, but we decided when to stop. We didn't build more advanced bombs that could level entire provinces, even though we very well could have. I know this sounds ridiculous, in that we aren't talking about the same level of potential devastation, but it's the principle I am after.
AI is not a genie that can be kept in a bottle. Making a nuclear bomb is a huge engineering effort. You need to build huge facilities, acquire rare resources, and potentially test it somehow. It takes a lot of energy to get there.
Software on the other hand, my computer is running tens of millions of lines of code right now, and for stuff with heavy utility there's not much the government can do to get in the way. Cryptography is a great example of this. Cryptography used to be regulated, but because it's easy to pass around code + binaries + keys there was really nothing the government could do to stop people from using it. So it was deregulated, as an admission that if you banned cryptography, you get a population without it pitted against criminals who do have it.
I'm guessing that AI will end up much the same way. You do need a lot of data, certainly a lot more than you need to get strong encryption, but data is ultimately easy to move around compared to nuclear manufacturing facilities. And it's even more open when you consider how much data is available that might be useful to an AGI. If you give a human 500,000,000 hours to read + process + innovate using just Wikipedia, you are going to get something impressive. An AGI has access to any page that's reachable from a URL bar, and that data is not something that needs to be passed around with the AGI codebase.
Granted, I personally believe that AGIs are 30+ years away, and I don't think that Wikipedia is enough data for something like a competitive NLP machine using modern technology, but I think that's the direction we will be moving in. We should accept today that AI will be a part of the future, and prepare for the inevitability rather than try to run from it.
Yes, and that's why atomic weaponry is unusable and largely irrelevant. In contrast, AI is eminently usable in business and government, and in such a wide range of means and purposes that it is much more likely to change our lives, in ways that are largely still unimaginable. As more AI is adopted and shapes policy and practice throughout the world, business, gov't, and Joe Public will gradually relinquish ever more personal information to it, inviting excess oversight, micromanagement, and abuse by those who control the AI.
After all, the corporations joined in this partnership are the very behemoths that gather and control almost all the world's social and personal information, which is their primary product and essential to their revenue stream and their very survival. If the public (and gov't policy makers) ever conclude that these firms and their AI are a rising threat, then like various European countries, the US too might enact 'onerous' regulation which could diminish their access to our data, AKA their life's blood.
I think THAT'S what this AI partnership is really about. It's the start of a political action initiative, apparently led by unthreatening figures like CS academics rather than capitalists. It's intended to diffuse the visibility of these companies' advancing political interests and lobbying in Washington so their growing injection of AI into our lives won't suddenly show up on our radar one day soon as an inbound threat from the clear blue sky above. Once they gain a sufficient foothold in Washington and assure that we remain inured to the idea of AI, they can turn up the heat until we froggies boil, blissfully oblivious to the rising clouds of steam.
Whether people agree with the philosophical or political aspects of your assessment aside, this was one of the more thought-invoking comments I have read on this subject in this thread and threads like it in a while. Thank you for writing it.
The end of the Cold War took a lot of the urgency out of nuclear weapons research, and with the urgency went the funding. That's why the US nuclear arsenal consists of designs from the 1960s and 70s, with little or no manufacture more recent than the 80s, and a litany of canceled programs from the 90s on.
Quite aside from that, there's a point past which chasing higher yields results in a loss of strategic flexibility, which is why both we and the Soviets eventually gave that up. (You can attack one big target with multiple warheads, but you can't attack multiple targets with one big warhead.) Modern nuclear weapons research and engineering, whatever there may be of it, would concentrate on boring stuff like improving efficiency and reliability, and reducing mass.
There's a lot to be afraid of even if you don't believe that AGI is achievable. For example, Google has openly talked about manipulating search results to steer potential ISIS recruits away from ISIS. That's a power that they are using without oversight, and who's to say that they aren't doing similar things to influence the outcome of the US national election (they have a vested interest in seeing Hillary win - Hillary is much better for their bottom line than any other candidate).
The giants are unparalleled in terms of their access to data, and that's something alternatives and competitors can't compete with. It's also clear that for machine learning, having more data is the most effective way to get better results. Stronger algorithms are only going to go so far if your competitor has three or four orders of magnitude more data and computational power.
And that's the scale we are looking at with companies like Facebook and Google.
Google isn't "manipulating search results to steer people from ISIS," nor have they suggested it. A bunch of advertisements were bought that would link ISIS keywords to videos of anti-ISIS propaganda. It's something anyone in or out of Google could do.
It's only fear mongering if it's deliberate and for some ulterior end. If spreading concern is motivated only by protecting the human race from destroying itself, that would be a reasonable thing to talk about. Who's benefiting from urging caution about potential bad outcomes for AGI?
AGI and heating oil? There's a troubling gap in reasoning there. Narrow AI and automation replacing jobs has its own set of problems. But a machine intelligence explosion is a completely different thing, and a legitimate existential risk. It's really important to understand the difference and the specific arguments.
Still, you can't find any time in history where we ended up saying "thanks, we were careful" about new technologies. It's usually the reverse: governments delaying change for reasons that weren't smart.
Anthropic principle. You are not observing the counterfactual universes where people weren't careful enough around certain dangerous technologies, because in those universes all the observers are dead.
Sure you can. When people working on the Manhattan project calculated how far away they could stand from an atomic bomb detonation--that was an appreciable precaution.
There are AGI risks that are well-reasoned and have detailed descriptions. If you aren't engaging with those arguments, proving them invalid or providing a well-reasoned protection, you aren't representing a coherent point of view.
>If you aren't engaging with those arguments, proving them invalid or providing a well-reasoned protection, you aren't representing a coherent point of view.
Since when does refusing to engage with fear mongering make you incoherent?
I already described the difference between fear mongering and pushing for caution, so don't conflate those terms. It's incoherent not to engage the arguments. Take the control problem into consideration: are you familiar with the reasoning behind the expectation that an AGI will be built that is smarter than humans at any reasoning task? Are you willing to go along with that? If so, then what follows is that the AGI improves the design of its own software and hardware better than any human engineer. At this point, there is a recursive improvement process termed an "intelligence explosion". From that point forward, we have no idea how to ensure that the goals of the AGI remain in alignment with human values. And the AGI can out-maneuver any attempt to turn it off. It's not guaranteed that it will want to avoid being turned off, but you can easily see how avoiding being turned off would improve the probability that it achieves its optimization objectives.
Where do you check out?
Edit: Nick Bostrom is the thought leader on this subject and he spells it out in this talk: https://goo.gl/DzNnk2. His book, Superintelligence, is really good.
Edit 2: And here is the audio for a Sam Harris talk also on this subject: https://goo.gl/6wdG43
Except automation is unlike all those others you reference, in that they all created new industries. AGI will just cut jobs and save labor costs for these large corporations.
The organizational structure has been designed to allow non-corporate groups to have equal leadership side-by-side with large tech companies.
Does anybody know more details? As a non-corporate entity, the opportunity is very interesting due to the potential of having access to their infrastructure. The cost of running AI projects on the cloud is currently prohibitive, and I am forced to run on performance-limited machines.
As usual, Apple is missing. I was pleasantly surprised to find Amazon on the list of collaborators. They usually take from open source communities and rarely give back. This is a good change.
So our choices are to build a peeping tom or a terminator? Whatever we build is likely to be both. (Any open-ended goal necessarily has subgoals of 'acquire information' and 'defend self by force if necessary'.)
Lockheed Martin's revenue stream isn't based on mining your personal information and communications. With Google at the helm, there is almost no chance this stuff will execute on your device alone rather than on their servers.
Analyzing data that is given to them or is public is not the same thing as exfiltrating data from your users' devices in misleading ways (e.g. sending WiFi info back even when WiFi is off).
The weapons industry is one of the dirtiest and most corrupt around. I'm having a hard time understanding how concerns about which server your AI is running on trump an industry that distorts politics, misappropriates public money, and contributes to the corruption of various regimes.
Actually, that sounds like some shade tossed on this endeavor, i.e. they were either not asked, or had to pony up money that only a multibillion-dollar megacorp could part with.
> They were either not asked, or had to pony up money that only a multibillion dollar megacorp could part with.
Perhaps you're right, but at least it seems like OpenAI would like to join if/when invited. Judging from the press release, it sounds like this partnership intends to invite non-corporate/non-profit members soon (and I think they'll lose a lot of credibility and support if they don't):
"Academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization, named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI)." ... "There will be equal representation of corporate and non-corporate members on the board of this new organization"
If you are working for any of these companies, you should really consider whether it is worth it, and possibly stop or switch to more meaningful and less evil endeavors.
Calling simple statistical clustering algorithms that are tweaked by lots of trial-and-error heuristics "AI" feels like calling those slow two-wheeled electric self-balancing skateboards "hoverboards". Sometimes marketing can be too dramatic.
> Deep learning is not a simple statistical clustering algorithm.
What is it, then?
In the 60s people used backpropagation to train neural networks. NN + BP is a very simple statistical clustering algorithm. I know that when I worked on neural networks in the 90s we still used backpropagation. Are they using something different now?
No, I don't think so. AFAIK, deep learning is essentially the same 1960s algorithms[1] (possibly modified a bit) running on much larger networks. Most progress is due to better hardware (and ad hoc configurations, made possible by the larger networks afforded by better hardware). Of course, SAT solvers, which have become extremely effective in recent years, are also still based on a 1960s algorithm[2], so use of an old algorithm doesn't imply lack of progress in effectiveness.
The two (NN and SAT solvers) share little theoretical progress (and certainly no theoretical breakthrough) in the past several decades, but SAT solvers aren't marketed as "AI" in spite of their seemingly magical abilities. I know that ML researchers usually cringe at the name AI and often try to disassociate themselves from the sci-fi term, but still, the marketing is extremely aggressive and misleading.
I realize that in every generation, marketers like associating the name "AI" with some particular class of algorithms, but it's important to understand that currently, assigning that name to this class of statistical clustering algorithms (regardless of their remarkable effectiveness in some tasks) is a stretch, just as it was when the term was assigned to other algorithms.
I don't deny that it works, but it isn't AI (not that statistical clustering isn't possibly a foundation for AI -- we have no idea -- but the current state-of-the-art is a far cry from the sci-fi meaning of the term).
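To make the claim concrete: the decades-old recipe being discussed (a feed-forward network trained with backpropagation) fits in a short, self-contained NumPy sketch. This is a toy illustration on XOR with assumed hyperparameters (8 hidden units, learning rate 1.0, 5000 iterations), not anyone's production code:

```python
import numpy as np

rng = np.random.default_rng(0)

# The classic XOR problem: not linearly separable, so it needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)         # hidden activations
    return h, sigmoid(h @ W2)   # network output

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backward pass: chain rule through the squared-error loss.
    d_out = (out - y) * out * (1 - out)   # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # propagated back to the hidden layer
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
```

Modern deep learning layers scale, better activations, and optimizer tweaks on top of this, but the core gradient-descent-via-chain-rule step is the same.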
AGI is a recent term coined precisely because the name AI has become confusing, which is precisely my point. It is also not widely known, and is used for the most part by people who believe in a sci-fi rapture-like event called the "technological singularity"[1], so it carries a bit of a cultish connotation. It doesn't do much to allay the confusion, though, as the term AI (even if understood to refer to something weaker than AGI) can be confused for a component of AGI. I imagine that at some point, something called AI may eventually lead to AGI, but we have absolutely no idea whether what we call AI now is even on the right path, so the confusion remains.
It is true that some algorithm has been called AI for decades. It wasn't always this one or anything similar to it. Both Lisp and Prolog were thought to be AI languages at one point.
My point is that the word "AI" has been referring to "weak AI" for decades. Everything from video game AI, expert systems in the 80's, genetic algorithms, searching algorithms, etc, have been called AI. There is no point arguing about the word "AI" now.
The word AGI is pretty established. There are AGI conferences. I've heard it used by a wide variety of people, not just singularitarians. "Strong AI" is another common term.
I don't think there is much doubt at this point that neural networks are on the right path for AI. They are extremely general, have made remarkable progress in widely different AI domains, and are the closest AI approach to the human brain.
> I don't think there is much doubt at this point that neural networks are on the right path for AI.
Not only is there doubt, I don't think any NN researcher would even dare to suggest (based on scientific knowledge; not as a mere conjecture) that neural networks, and certainly current NN algorithms, have anything to do with AGI, which, at this point, is still a dream or a sci-fi concept.
> They are extremely general, have made remarkable progress in widely different AI domains
They work precisely where statistical clustering works, because that's what they are. Statistical clustering is extremely effective.
> and are the closest AI approach to the human brain.
We don't know that. It is possible, even likely, that statistical learning plays some low-level role in the brain. We know little beyond that, but it is pretty certain that neurons in the brain work very differently from neural networks. I don't think anyone imagines that backpropagation is used by the brain.
I remember learning about oligopolies and cartels in high school economics. Pretty sure this is, at its core, a form of collusion meant to undermine the competitive spirit of the market. The ethical implications of this strategy are quite dire. No one stood up to the robber barons then, and likely no one can now.
The moment I saw the headline, I noticed Apple missing from the list, and it felt right! Facebook, Amazon, Google...Microsoft...IBM...all coming together to promote (sell) AI? This sounds like the coming together of the evil powers.
Apple, however successful it may continue to be financially, needs to focus on a wider penetration of its devices and services if there is to be any meaningful dent on the privacy front around the world. Being a market leader in one country (or a few) doesn't help much when billions of people around the world use Android phones where the default is "ask for any permission and it shall be given." For this to change, I believe Apple must go lower on the price front, even if that means lower margins. It also needs to push forward quicker on things that other companies don't consider, like differential privacy, and look for markedly different ways of doing things compared to the personal data hungry parasites like the ones in the title.
Good point. I find it funny to read other comments here of "who cares about Apple not being in the group?". Apple is just one of the most successful companies of all time, and it demonstrates over and over that it has a bit more integrity and commitment to guarding users' privacy. Kudos to Apple.
Wow, that's really sad that they have refused to use the OpenAI operating system, which shows exactly how much they really care about how this reflects on their jobs.
At the risk of ad hominem, this is typical techcrunch reporting:
>> "Though Apple is said to be enthusiastic about the project, their absence is still notable because the company has fallen behind in artificial intelligence when compared to its rivals — many of whom are part of this new group."
How exactly is it that TC knows that Apple has indeed fallen behind? Are they privy to the Apple ML roadmap? Are they using lack of open source activity as a metric to make this claim? Is there an unidentified source who can objectively measure the ML progress across these organizations, and using this objective metric, conclude that Apple is behind?
It's a claim without much substance, and it paints Apple in a negative light. You could say that this is a marketing failure on the part of Apple, and you might be correct. For example, see the article floating around a few weeks ago on Medium (I think) on how Apple was embedding ML in everything.
In the days of price-performance wars in CPUs (and GPUs), there were more or less objective (err, almost objective) benchmarks that people could point to. This is not the case with ML/DL. It would be great if we could say: "Across image classification, the precision/recall is X, vs. Facebook's Y. Clearly, Apple has more work to do in image classification. But in machine translation, Apple is ahead, with metrics A vs. B from Facebook."
What is happening with ML/DL/AI/whatever is that all companies are using the same bag of words to describe what they do, but the popular press is not discerning enough to make heads or tails out of what they report on, and they end up mis-educating the public.
> there were more or less objective (err, almost objective) benchmarks [...] This is not the case with ML/DL.
You are completely wrong about this. Microsoft, Facebook, and Google have all participated in public, objective ML competitions such as ILSVRC. They also all publish academic papers with results on standard ML benchmarks, and often running code as well. They even publish their own benchmark datasets such as the widely used MS-COCO or today's YouTube-8M. Apple does none of this, and that's where the perception that they are behind comes from.
To clarify, this was not in reference to the lack of metrics in the field, which obviously exist. Nor was it in reference to the lack of metrics/citations in academic and scientific publications. It was in reference to the lack of specific citations/metrics backing up a claim from a popular publication targeting a mass audience, which does not necessarily have the subject-matter expertise to fill in the data on its own. The information you provide comparing Apple's lack of benchmarking to others would have been useful to have in the article.
Honestly, this comment reads like an emotional outburst by someone who has zero clue about the domain.
It's a very well-known fact in the AI/ML community that Apple has little or no talent, nor do they have any major efforts at an organizational level (e.g. FAIR at FB, MSR at Microsoft, Brain & DeepMind at Google).
>> But in Machine Translation, Apple is ahead.
LOL, where did you get this from? I am pretty sure that Google NMT, which was put in production yesterday, is the state of the art.
Also, there are metrics: e.g. in the report released yesterday you can find BLEU scores on WMT 2014 tasks, and the clear conclusion is that Google is way ahead. Also, when it comes to ImageNet or COCO challenges, I don't think Apple has competed in any, let alone placed anywhere at the top, while FB, MSR and Google have all had top models.
http://arxiv.org/pdf/1609.08144v1.pdf
>> but the popular press is not discerning enough to make heads or tales out of what they report on, and they end up mis-educating the public.
Sorry, you are wrong. The sad reality (for Apple) is that they are truly 3-4 years behind FB, MSFT & Google.
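For anyone unfamiliar, the BLEU scores cited from the WMT'14 results are just modified n-gram precision combined with a brevity penalty. Here is a minimal single-reference sketch of the idea (illustrative only; real evaluations use the official corpus-level, multi-reference implementation with smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Toy sentence-level BLEU against a single reference string."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # "Modified" precision: clip each n-gram's count by its count
        # in the reference, so repeating a word isn't rewarded.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(1, len(hyp) - n + 1)
        if overlap == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_prec += math.log(overlap / total)
    # Brevity penalty discourages overly short hypotheses.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec / max_n)
```

A perfect match scores 1.0 and a hypothesis sharing no words with the reference scores 0.0; published WMT numbers are this quantity (times 100) aggregated over a whole test set.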
"But in Machine Translation, Apple is ahead." was an example of a statement that could be made if metrics existed, he is not asserting that that is true.
I assumed that he was asserting it to be true because, as my updated comment shows, the metrics do exist. But Apple neither releases them nor discloses the architecture/technology (if any) used.
As others have mentioned, you are completely wrong in your assumptions. It is you who shows zero clue in reading comprehension and communication. I'm well aware of the benchmarks used by industry and academia. I've been working in the industry for many years, including on multiple ML-based products used commercially and running in production serving a large number of customers. Machine translation was just an example, and there was no assertion that company A is better than B; I was in no way "asserting it to be true". I was merely providing an illustrative example to highlight the fact that *popular press* articles about ML, such as this one, are quick with subjective opinion, without providing in writing the necessary citations and metrics to back their claims. Furthermore, they do so at a generalized level, without going into specific subdomains. My initial post was intended as a constructive comment, so take it as such.
Following that line of reasoning, let's just assume that Apple has solved quantum computers, teleportation, and room-temperature superconductors. Why stop at AI/ML?
> Apple can't figure out that I don't want to say 'ducking' after hundreds of times of dismissing it with their keyboard.
Or they can predict, but don't because it's a swear word and it's a product decision.
Apple's historical secrecy does hurt them here. However, they have taken the stance to not slurp everything up and pack it off to the mothership like Google, which I understand has led to some interesting work on resource constrained models (since it has to run on your device) and data anonymization.
All you have to do is type "fucking" ONCE, then tap the word "fucking" in quotes, on the left-hand side of the suggestions/corrections bar, and the word is learned as an acceptable dictionary word. Permanently. Forever. And that persists when you change devices.
Anyone feigning ignorance of this feature at this point just isn't trying very hard.
Uh, you're just making this up. You have no idea whether the "microphone" (by the way, for those who are not fact-averse, it's dual beam-forming mics) in the new wireless earbuds is "laughable".
They aren't even released yet. You've never tried them. You have no idea whatsoever.
By the way, it's actually very easy to teach Apple's devices the word "fucking". You haven't figured it out yet, but that doesn't make it Apple's fault. Did you try Googling it yet or spending five seconds figuring it out yet? I can assure you, I'm quite able to use any profanity I want on Apple's devices; they learned about my inclination towards profanity years ago and have flawlessly handled it since.
>They aren't even released yet. You've never tried them. You have no idea whatsoever.
This is a funny assertion.
I've used a microphone before, I've used headphones before, and I've used Siri before.
Call me brazen but I believe I can imagine the advanced world that will exist post Apple wireless earbuds, dual beaming microphones and all.
You seem really impassioned about a product that you also haven't used and that the media has generally laughed at and dismissed.
You also didn't actually respond to my main point about them, that Siri never delivered on its value proposition. If it had, it might be considered a great product.
> You seem really impassioned about a product that you also haven't used and that the media has generally laughed at and dismissed.
I didn't read it as being for a product, but instead against people being against it without having tried it. Granted, it's certainly possible to be against something without having tried it (for instance, I've never broken a bone in my body, but I'm pretty confident in my stance against ever wanting to), but when it comes to something like how well a new microphone works, it's really hard to say without having tried it and without anybody else who's tried it to rely on. Relying on the media you've read that "laughed at and dismissed" it isn't relying on much at all, until it's actually released and properly reviewed.
He is referring to the fact that a mic with poor speech to text models isn't that helpful. Which is not made up, Apple is definitely behind Google (and perhaps others) in the accuracy of their mobile speech to text.
Well are we talking about applied AI or theoretical AI? As a dumb user (and I use Apple products), I definitely get the feeling that Apple is behind in the game. Now do they have kick-ass R&D that isn't getting shipped to users? Perhaps, but that's not what customers and investors care about.
>> "Though Apple is said to be enthusiastic about the project, their absence is still notable because the company has fallen behind in artificial intelligence when compared to its rivals — many of whom are part of this new group."
could easily be replaced by "Apple decided not to join the group because they're so advanced compared to its rivals. This group was actually created in an attempt to catch-up with Apple".
Neither of these extreme claims is really backed up by substantial facts.
I'll add that I've owned both a Symbolics 3620 and a MacIvory. I was being a little glib, but I wouldn't mind seeing some kind of resurrection of the Lisp machine technology.
Oh, great. So all the companies that have recently had the most problems with ethics issues and user privacy issues are now collaborating in order to more effectively address those issues? Pardon me if my scoffing is audible.
IMHO ethics will effectively be the defining characteristic of AI in the future. The companies who innovate AI, however, will probably never mention it except as marketing copy.
That leaves literally everyone else holding the bag with regard to judging the industry's actions and holding it accountable for transgressions. I'm not sure how that will work out, but I'm not terribly hopeful...
It may be a focus for people who are not actively working on the machines, but for people who are actively innovating in the AI space, the most powerful machines will be created by the people who are spending the least amount of time stressing over the ethics of various decisions. More powerful AI means a more competitive business, so the industry will self-select for those with lesser concerns for the ethical implications of AI.
We already know that we are unable to hold industry accountable for unethical action. How many times did we scream about Facebook's increasing privacy abuses? How many times do people talk about being uneasy because Google knows where their flights are, where their home is, where their favorite restaurants are, even if it was not told explicitly. And, despite all the complaints, the industry giants committing the worst atrocities remain the biggest giants. It's because violating the rights of your users makes you competitive, and the users can't tear themselves away from the increased power that it lends the features.
We need to accept that when AI arrives, we're not going to have very many controls over what it does. Regardless of how terrible the implications might be, we're going to be about as effective at stopping the arrival of AI as we have been at stopping the arrival of global warming. We need to prepare for a post-AI future and just accept that it's going to be invasive and have relatively little regard for human ethical concerns, and instead be focused almost entirely on the things that make it competitive.
A major book about ethics, "After Virtue", takes as half of its subject matter what the author calls "the interminable nature of ethical debates" and the failure of post-Enlightenment ethical thinking.
I don't see how AI is going to suddenly make us capable of ethical reasoning on a large scale... unless... maybe AI could do the reasoning for us...
I'm probably going to be downvoted to oblivion, but I would actually be more at ease in general if most people would defer critical judgments to a reliable and open-sourced AI. For example in terms of driving, my discomfort about how an AI would handle being Kobayashi Maru'd is far less significant than my discomfort about encountering a teenage driver.
1. These companies have been collecting our information for years now. Some have access to what we write in our emails, but of course, they never read them; they just scan them for marketing purposes?
2. Why do I feel certain people's information has been looked at, scrutinized, cross-checked, collated, etc. by certain savvy insiders? Warren Buffett, George Soros, any of the financial movers and shakers: the information is sitting on a server somewhere, unless you're a Clinton. If I had access, I couldn't help but look at it.
Before you made an investment, bought a stock, bought real estate, took over a company, wouldn't you be tempted to peek at some of that information?
3. I feel certain individual information has been used as research for financial gain.
4. I believe it's basically insider trading without the other guy knowing he/she gave away any information.
5. I believe it will be exposed, and will be the next huge financial scandal.
6. I believe this move might be a smoke screen. "We know some of us have already abused private information for personal/financial gain. Let's combine the data. It might put some reins on what we all know some of our insiders have been doing. Let the people think we are doing this to better society."
7. I don't have any evidence; I just have a hard time believing no one is looking at the juicy data pouring in from some high-profile people.
8. I believe it will be on the front page of Forbes in less than a year.
Okay, which AI are they talking about? The term can mean various things. I mean if this were merely heuristic neural networks, one would think that Tesla would be included.
[2] https://twitter.com/OpenAI/status/781243032582578177
[3] https://news.ycombinator.com/item?id=12428883