It's like Drum Buddy without a soul

http://www.drumbuddy.com/ https://youtu.be/vvtssL8WlJA


Seriously awesome, spatial, acid, full package.


Reminds me of ARDI Executor, which felt incredible in its day.

https://en.m.wikipedia.org/wiki/Executor_(software)


Thank you for the compliment. I wrote the beginning of it (B&W graphics (QuickDraw), filesystem, port to Sun3/60, port to NeXT), although people smarter than I did some of the heavy lifting (port to DOS, port to Windows, color graphics (QuickDraw), synthetic CPU).


What an amazing project! It seems almost like a path to madness, trying to reverse-engineer the platform, but even very early on there were Mac-only applications that were desirable on PCs.


I tried to resell it at my PC shop but could not get any buyers. 68K Macs like the Mac SE sold for $100 on eBay.


I was just going to mention this as well. It seemed like magic back in the day.


Executor was released under an open source license in 2008. I wonder if any of its code was used in Advanced Mac Substitute.


Executor used some fiendishly clever tricks to achieve performance on 1990s hardware at the expense of portability (e.g. the possibility of 64-bit support). Advanced Mac Substitute has different priorities and doesn't use any code from Executor.

However, ROMLib looks like a good source of unofficial system documentation, which I expect I'll be consulting in the future as I work on parts where Apple's docs become less detailed.


I just ran into this with some pages in our product as well. If you run Chrome with '--enable-logging --v=2', the chrome_debug.log will contain messages from the phishing classifier (search for 'phishing_classifier'). I was able to tweak the wording on the page to drop the score below 0.5, but there are other features that may be causing your problem.

You may need to restart the browser between edits, as it seems to cache the classifier results by URL. It also skips classification for hosts with private IPs, so I had to jump through some hoops to test.
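If it helps, here is a minimal Python sketch that pulls the classifier's feature lines and final score out of the log. The LOG_PATH is an assumption (the chrome_debug.log location varies by platform), and the regexes are based on the verbose line format quoted in the log excerpt further down this thread.

    # Minimal sketch: extract phishing-classifier features and the final score
    # from chrome_debug.log. LOG_PATH is an assumption -- point it at the actual
    # log location for your platform.
    import re

    LOG_PATH = "chrome_debug.log"

    feature_re = re.compile(r"phishing_classifier\.cc\(\d+\)\] Feature: (\S+) = (\S+)")
    verdict_re = re.compile(r"Phishy verdict = (\d+) score = ([\d.]+)")

    features, verdicts = {}, []
    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            m = feature_re.search(line)
            if m:
                features[m.group(1)] = m.group(2)
                continue
            m = verdict_re.search(line)
            if m:
                verdicts.append((int(m.group(1)), float(m.group(2))))

    for name, value in sorted(features.items()):
        print(f"{name} = {value}")
    for verdict, score in verdicts:
        print(f"phishy={verdict} score={score}")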


For offering a concrete self-help approach amid a sea of speculation, and for showing that the text on the page changes the classification score -- I hope you get upvoted more.


Very nice! This will actually be very helpful in tracking this. Thank you.


Here is the output snippet. Basically, some "algorithm" thinks it has found phishiness with some score above 0.5 and flags it. No clue as to what caused it (we know that it can be triggered by simply changing the name of the "Login" button to "Connexion"!!).

Must be nice to dream up some "algorithm" and push it out.. sigh

[5570:1799:0701/133949:VERBOSE1:client_side_detection_host.cc(221)] Instruct renderer to start phishing detection for URL: http://dev1.codelathe.com/ui/core/index.html
[5579:1799:0701/133949:VERBOSE2:phishing_classifier_delegate.cc(238)] Not starting classification, no Scorer created.
[5579:1799:0701/133950:VERBOSE2:phishing_classifier_delegate.cc(238)] Not starting classification, no Scorer created.
[5570:1799:0701/133954:VERBOSE2:client_side_detection_service.cc(255)] Sending phishing model to RenderProcessHost @0x7aa18a00
[5570:1799:0701/133954:VERBOSE2:client_side_detection_service.cc(255)] Sending phishing model to RenderProcessHost @0x8043d620
[5579:1799:0701/133954:VERBOSE2:phishing_classifier_delegate.cc(283)] Starting classification for http://dev1.codelathe.com/ui/core/index.html
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlTld=com = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageImgOtherDomainFreq = 0
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlOtherHostToken=dev1 = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlPathToken=html = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageLinkDomain=tonido.com = 1
[5574:1799:0701/133954:VERBOSE2:phishing_classifier_delegate.cc(275)] Not starting classification, last url from browser is , last finished load is chrome-extension://jpjpnpmbddbjkfaccnmhnkdgjideieim/background.html
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlPathToken=core = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageTerm=password = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageHasTextInputs = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageExternalLinksFreq = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageHasPswdInputs = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageSecureLinksFreq = 0
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageTerm=connexion = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlDomain=codelathe = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlPathToken=index = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageTerm=account = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageHasForms = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageNumScriptTags>1 = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageNumScriptTags>6 = 1
[5579:1799:0701/133954:VERBOSE2:phishing_classifier_delegate.cc(211)] Phishy verdict = 1 score = 0.548927
[5570:1799:0701/133954:VERBOSE2:client_side_detection_host.cc(447)] Feature extraction done (success:1) for URL: http://dev1.codelathe.com/ui/core/index.html. Start sending client phishing request.
[5570:1799:0701/133954:VERBOSE2:client_side_detection_host.cc(415)] Received server phishing verdict for URL:http://dev1.codelathe.com/ui/core/index.html is_phishing:1
[5570:1799:0701/133954:VERBOSE2:client_side_detection_service.cc(255)] Sending phishing model to RenderProcessHost @0x802b7ff0
[5580:1799:0701/133954:VERBOSE2:phishing_classifier_delegate.cc(259)] Toplevel URL is unchanged, not starting classification.


Thanks for the really useful tip to look into Chrome's debug log.

First of all, we see that this so-called phishing detection filter's code can be found at http://src.chromium.org/svn/trunk/src/chrome/renderer/safe_b...

Second, this code and the logic it employs are really bull.

The World Wide Web is not a kiddie playground, especially for a browser, and especially for a plugin whose job is to detect phishing. The way Chrome's anti-phishing works is to use several foolish measures that mean nothing in the real world and then 'punish' and push websites into oblivion when they cross this arbitrary set of rules.

The way the plugin appears to work is to look at various things:

* The type of URL (IP vs. domain name, number of subdomains, size of the subdomain names, the strings in the URL path)
* Whether the page contains form data
* Whether the page contains a password input box
* Whether the page contains checkboxes/radio boxes
* Whether the page text contains certain terms (in this case 'connexion')
* Whether the page has links/images to other domains

and so on.

None of these are ANY indication of phishing behavior, and if this set of quackery-based logic is what we see from Google Chrome, where else can we go to really feel safe and protected?
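To make the scoring mechanics concrete: the log output elsewhere in this thread suggests a model that combines binary page/URL features into a single score compared against 0.5. Below is a purely illustrative Python sketch of how a logistic-regression-style scorer could do that. The feature names mirror the log, but the weights and bias are invented for the example and are not Chrome's actual model.

    import math

    # WARNING: hypothetical weights and bias, invented purely to illustrate how a
    # linear model over binary features can cross a 0.5 threshold. These are NOT
    # Chrome's real model parameters.
    WEIGHTS = {
        "PageHasPswdInputs": 0.9,
        "PageTerm=password": 0.7,
        "PageTerm=connexion": 0.6,
        "PageSecureLinksFreq": -0.8,   # a higher fraction of HTTPS links lowers the score
        "PageExternalLinksFreq": 0.4,
    }
    BIAS = -2.1

    def phishiness(features):
        """features: dict of feature name -> value (0/1 flags or frequencies)."""
        z = BIAS + sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))   # logistic squashing into (0, 1)

    page = {
        "PageHasPswdInputs": 1,
        "PageTerm=password": 1,
        "PageTerm=connexion": 1,
        "PageSecureLinksFreq": 0.0,
        "PageExternalLinksFreq": 1.0,
    }
    with_connexion = phishiness(page)
    without_connexion = phishiness({**page, "PageTerm=connexion": 0})
    print(f"with 'connexion':    {with_connexion:.3f}  flagged: {with_connexion >= 0.5}")
    print(f"without 'connexion': {without_connexion:.3f}  flagged: {without_connexion >= 0.5}")

With made-up weights like these, a single extra term such as "connexion" can be what tips the score past the threshold, which matches the behavior reported above; that is simply the nature of a weighted-sum classifier, not proof that any one feature is treated as decisive.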


As much as I can understand you being upset that Chrome shows a warning for your site, I don't think that the approach they are using is unreasonable.

I'd take bets that those criteria correlate with phishy sites, especially when those metrics are combined.

Is it perfect? No. Does it produce false positives? Yes. Is it beneficial on average? I think so.

PS: Since you have found the relevant file in the open source project (or 'kiddie playground', as you like to call it), why don't you supply a superior implementation with fewer "foolish" measures?


My point is that with a browser (similar to an OS), they cannot take things lightly and flag things left and right based on "heuristics". With great power comes great responsibility.

My point is that if you are going to design a system to identify bad websites, it had better be fail-safe; otherwise it is going to cause a lot of hurt.

The message shown in the browser for a phishing warning is the same as when a website has an invalid SSL certificate. The former is vaguely accurate; the latter is 100% accurate, and no one is going to argue whether that warning is needed. Both show a chilling warning that no sane user will click through.

I am more interested in removing the phishing filter than in writing a phishing filter.

Anyway, with a 'closed' server component also in the mix, what option is there to provide any implementation?

IMHO, doing things for the 'benefit of most' will lead to eroded freedoms for all over time.

PS: 'Supply a better implementation' is not an answer to writing poor code and foisting it on the world.


You are trivializing the underlying issue here. If the same thing happened in the physical world, it would be a high-profile public defamation case.

The browser is the window through which people see the world. That's the reality we live in. In our target market, Google Chrome holds a 40% market share. Because of its stupid categorization, in one stroke Google has harmed our reputation and the reputation of the companies we serve. It is not a simple browser compatibility issue. Google Chrome is telling the world our software is phishing software when it is not. What is the recourse here?

We don't care what Chrome's algorithms are. But the results are not factual, and they harm our business. One cannot escape responsibility by saying "hey, that is our algorithm." "We don't do evil", remember?


Believe me, I am empathetic to the pain this is causing you. I can understand the anger you are feeling.

But I don't think that I am trivializing things. The fact is that phishing sites are causing real pain (as in millions of dollars lost by victims, hundreds of thousands of computers becoming zombies, etc.). All major browsers are trying to mitigate these risks by implementing phishing & malware filters. None of these implementations are perfect (you probably know a thing or two about bugs in software development).

But on average these filters have a positive ROI - especially for the target market (which is Joe WebUser and sadly NOT your company - or mine for that matter). The costs of a false negative ("I put my login+password into this legitimate-looking website and now I can no longer access PayPal") far outweigh the costs of a false positive ("I'll go & find that information on another site").


@hiddenfeatures

Yes. Let's apply this everywhere. Let's electrocute folks based on "heuristics" because there is no other way to find the "bad guys".

It is nice to act as an arbiter and spout philosophy, isn't it?

If you really do think that there is no better way, then I guess there is no point in arguing about this further.


Or let everything go through until and unless we are 100% certain that it shouldn't. Like, if someone is pointing a gun at you, do not duck because there is a chance he/she will miss. Because you know, exaggeration is truly a great tactic to convince other stakeholders.


Even though this looks like a troll attempt, let's try this.

The problem is:

1. No clarity on what constitutes a problem.
2. No official channel to contact to clear up a problem.

This results in possible irreparable loss of business.

So, if you insist on interesting and orthogonal "analogies".. please carry on.


I was NOT trolling. I was pointing out that (A) exaggeration is not a great debating tactic (in your case it was a clear slippery-slope argument), and (B) it will not help in convincing the other stakeholders to be empathetic with your situation, because you equated them to mindless psychopaths.

> So, if you insist on interesting and orthogonal "analogies".. please carry on.

If it was not clear, I was trying to describe a possible issue with your "let's apply this everywhere" argument.

The two arguments you just put forward are nowhere close to what you said in the comment I replied to. Yes, there are issues with the current implementation, which is very similar to how spam detection/prevention systems work at the moment. Yes, there can be improvements to it. There can be improvements to everything. Yes, there is a high chance of false negatives in the current system, but this is a problem where false positives can be just as disastrous. If we cannot agree on that, then I do not think it is worth continuing this discussion.

Now if you check the top comment on the thread, I believe the communication channels have already been established. That they did not work for you as promptly as you would have wanted is a different issue. But there definitely exists an official contact to clear up the problem; your colleague seems to be aware of it. The lack of clarity about the reasons has been stated as intentional and has been discussed elsewhere in the thread.

It was poor of me to use snark instead of clearly stating my stance, but the stupidity of the analogy that you are blaming me for is not much different from what I was trying to mock.


I agree (and have posted in this very thread) that having controls for detecting spam/fraud is good. I have also posted that the primary problem is that there is no way to either a) avoid this problem by adhering to some guidelines, or b) directly contact the developers to figure out and resolve the problem.

Every update to the browser can potentially change the model, affecting a large number of users, and the only way to figure out the problem is some sort of trial and error.

This would have been fine if the product in question were a niche product or an exotic browser. But the fact of the matter is that with Chrome being one of the dominant browsers and Google being the product owner, Google's opinion is far-reaching and can easily destroy a product (akin to killing a person based on some assumption).

Also note that the "communication channels" listed earlier were completely useless for this type of problem, where the client side is throwing the error (not tied to a specific domain or even a URL included in the page).

Understand that, this being a commercial product, ALL of those methods were (obviously) tried and none of them resolved it. You can see that this specific instance gets triggered simply by having a button named "Connexion" instead of "Login" (detected purely by backtracking through the changes).

So the frustration is not meant to belittle Google's effort at combating spam/fraud but to point out the effect of such wide-ranging blanket solutions.

While "Collateral damage" is a very nice way to de-sanitize and make things palatable for all parties involved except those getting to be the "Collateral damage".

At the end of the day, I am sure folks understand that Google, being Google, can do what it wants and probably even bury the whole issue from getting any traction.


> even bury the whole issue from getting any traction.

Wasn't your site explicitly whitelisted?


Yes. I believe it has been added to a temporary whitelist. But it is not clear if it is domain-based or some kind of signature-based. If it is domain-based, then whitelisting is not useful (this is hosted on various domains, as noted by others). If it is signature-based, it will be more effective (though the signature will change as the server code changes, and since there is no idea what goes into the signature, there is no way to avoid it).

Also, dev1.codelathe.com was set up again specifically to trigger the warning (it was determined that if the login button has the keyword "Connexion", that pushes the phishing score past 0.5).

The main thing is that a clear way to contact the team responsible for this would be the best option for anyone with a similar problem, and at this point no such avenue exists.


According to this[1], the classifier is intentionally obfuscated to prevent reverse engineering, so I don't know how far you're going to get with the log beyond just knowing your score (and the score can change if they change the model, which may explain why some people aren't seeing a problem; I see no phishing warning in the Chrome dev channel, for instance).

Since it looks like it's a model trained offline on known phishing sites, unfortunately I think your best bet is tweaking until you fall under the threshold and (if you're feeling magnanimous) filing a bug on Chrome with an example of how the current model is flawed (though if the page is working in dev channel, something may already have been fixed).

That sucks, sorry :(

[1] https://code.google.com/p/chromium/codesearch#chromium/src/c...


@alternize

That, precisely, is the issue. We can only speculate as to what might be good or bad. There is no way to really know what is correct (and in fact, why is it even the business of a browser to determine that?). In a lot of situations, it is not something the product gets to decide. For example, this got triggered when a customer added translations to the application (which changed that button).



translation implies "other language". instead of still defining the page as english, why not try setting the page's language to the language the customer (allegedly) translated it into?

in your case, non-translated "Account" and "Password" texts in a french corpus are most probably much much more common than a wrongly-translated french "Connexion" in an english corpus...

that said, i do not know if chrome is really considering the language or not, but i certainly would hope so. :)


most probably "connexion" in relation to "password" was used in an unrelated (english?) phishing attempt.

mixing different languages might be bad. try changing all the page text (and the corresponding content-language header) to the localized language instead of just changing the submit button. maybe this will make the classifier use the contextual meaning of "connexion".


Better format: http://pastebin.com/1NE2Tud8

Can you remove the "core" part of the url? => UrlPathToken=core = 1

You could try another file extension for the page (don't know if this is possible with GWT). => UrlPathToken=html = 1

The "powered by" link seems also problematic => PageLinkDomain=tonido.com = 1


Interesting! I guess most phishing guys have these in their URLs and on their pages. Therefore it MUST mean that every page with those keys is phishing... just great.

Chrome user$ grep phish chrome_debug.log | grep -e "UrlPath" -e "PageTerm"

[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlPathToken=html = 1

[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlPathToken=core = 1

[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageTerm=password = 1

[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageTerm=connexion = 1

[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: UrlPathToken=index = 1

[5579:1799:0701/133954:VERBOSE2:phishing_classifier.cc(192)] Feature: PageTerm=account = 1


The client downloads the current word list from this URL: http://static.iminlikewithyou.com/drawsomething/wordlist.csv

I've been meaning to see if I can run a proxy and substitute my own words.

EDIT: I personally use wordplay (http://hsvmovies.com/static_subpages/personal_orig/wordplay/...) with the -lx flags on this file when I'm stumped


Nice! Steve - can you use this word list instead? (http://static.iminlikewithyou.com/drawsomething/wordlist.csv)


Yep, the site includes that word list now :)


Wow, great find. Obviously using the actual list would be a lot more helpful since they often combine multiple words into one ("lavalamp", "musicbox") and use a lot of very current pop culture references ("lin", "ladygaga"). Definitely a unique dictionary.


You can use a proxy and substitute your words. Also keep in mind that the format is <word>,<score (1,2,3)>,<premium word or not (0 or 1)>.
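For anyone who wants to play with the list locally, here is a minimal Python parsing sketch based on that format, assuming the CSV has been downloaded and saved as wordlist.csv:

    # Parse the Draw Something word list: <word>,<score (1,2,3)>,<premium (0 or 1)>
    import csv

    def load_wordlist(path):
        words = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if len(row) != 3:
                    continue                  # skip any malformed lines
                word, score, premium = row
                words.append((word, int(score), premium == "1"))
        return words

    words = load_wordlist("wordlist.csv")
    # Example: list the three-point premium words
    print([w for w, score, premium in words if score == 3 and premium])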


Wow thanks for the link to the csv. I just pushed a new version that uses that list!

