richo's comments | Hacker News

Explain to me how a VPN makes your connection faster.


Not related to this service, but for VPNs in general: it does this by preventing your ISP from throttling specific kinds of your traffic. If they know you're downloading a torrent, they'll lower the speed. They don't know what you're doing via VPN because it's all encrypted. For example, Verizon would limit Netflix streams to SD rates, but if you watch Netflix via VPN, you would see SuperHD or 1080p rates because Verizon doesn't know what you're doing via VPN.

Of course, if you're using an ISP that doesn't throttle and follows network neutrality properly, then a VPN won't increase the speed for you.


This is a well documented problem and the crux of the net neutrality debate:

http://mattvukas.com/2014/02/10/comcast-definitely-throttlin...

Netflix buys backbone access from a backbone provider, and the ISPs then have a connection with that backbone provider. The pipe from that provider to the ISP gets congested, and there is an argument as to who should pay.

However, the pipes from you to your VPN data centre and from there to Netflix's backbone provider are congestion-free, so you get a faster rate.

Internet routing isn't as dynamic as we imagine, and while Netflix offers to put caching boxes inside the ISP's network, the ISPs are not keen on that solution.

The end result is that you are performing a manual re-routing of data to bypass congested links.


Compression is one simple answer. Another is that any server in a datacenter is going to have much better routing and connectivity than a residential connection. If you have good (uncongested) routing to one VPN server, you can use it as an alternative to your ISP's network.

Some gamers use OpenVPN, with encryption and compression disabled, which functions as a very low latency UDP tunnel.

Home -> Good ISP routing -> VPN server -> Game server (20ms RTT)

Home -> Bad ISP routing -> Game server (60ms RTT)
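
For reference, a minimal client config for that kind of tunnel might look roughly like this (the server address and port are placeholders, and leaving out any comp-lzo directive keeps compression off):

    client
    dev tun
    proto udp
    remote vpn.example.com 1194
    nobind
    # disable encryption and HMAC to minimise per-packet overhead
    cipher none
    auth none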


Right, but the traffic still passes through your residential connection, so you're still limited to the bandwidth provided by your ISP; if anything, the overhead of a VPN will decrease internet speeds there.

In other words, a VPN isn't an alternative to an ISP's network, but rather an additional system on top of it.

On the other hand, a VPN will generally bypass an ISP's own DNS servers, which could afford some speedup when performing domain name resolution/lookup if the ISP's nameservers are sluggish (though configuring your system to use alternate nameservers would do this without the overhead of a VPN).
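
(On Linux, for instance, that's just a matter of pointing /etc/resolv.conf at a public resolver, e.g.

    nameserver 8.8.8.8
    nameserver 8.8.4.4

or setting the same servers in your network manager / DHCP client config.)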


If your ISP provides only a congested route to, say, Netflix or YouTube, then a tunnel through a better-connected VPN server can allow you to bypass that route entirely, increasing throughput and lowering latency.

In the case where the ISP already provides optimal routing, a VPN cannot improve latency. Compression and buffering, among other things, may still offer better throughput.


True. In that case, though, the better advertising approach would be to state that directly rather than vaguely (and inaccurately) claiming a "20x" boost in download speed.


If you believe that "don't roll your own crypto" is some kind of absurd mantra the security industry uses to keep us in business, I recommend that you roll your own crypto, and keep us in business.


Schneier's Law comes to mind:

"Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break."

https://www.schneier.com/blog/archives/2011/04/schneiers_law...


Oh, that's what that is!

I had found a similar thing making my own puzzles, "It's easy to make a puzzle you can't solve, but it's hard to make a puzzle that's fun."

Well, that's not entirely the same thing, but they overlap, I guess.



I would compare it to a writer's inability to spot typos in something they wrote themselves. If the same algorithm were written by someone else, they could probably tear it apart easily.


I do not have concerns with the mantra itself, just its usage and the entitlement that often comes along with using it. The top answer on this Stack Exchange question is a good example of what I believe to be proper usage of the mantra. http://security.stackexchange.com/questions/18197/why-should...


How far should we take that maxim? It implies that no one should ever attempt this, but that leads to nothing new (unless you are first recognised as a crypto guru -- but how could you become one?).

I think it's worth drawing a distinction between the algos/maths and attempts at implementations. Otherwise we wouldn't have things like OCaml-TLS and others.

http://openmirage.org/blog/introducing-ocaml-tls


Implementations are even more sensitive to tiny bugs with huge consequences than algorithms. It's fine for people to write their own implementations if they're never used, but anything that will be used needs a large number of experts and a large amount of time before it should be trusted.


Maybe people can show new stuff they make on HN etc before using it in their apps.


HN is not and never will be an appropriate stage for cryptographic review.


It's like Pascal's Wager for security.


* Add `figure ALL=(ALL) NOPASSWD:ALL` to the sudoers file.

This is cute.


That's for deleting apps, since docker volumes may produce something that only root can access. I haven't found a way to get around it; any suggestions?


I assume you mean data outside the container?

Maybe you could at least silo the utility that needs to delete so that it can easily be inspected and so you don't have to trust the whole program.


Exactly. A setuid `delort-container` utility would be a good start.
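
Or, if you'd rather avoid setuid, a sudoers entry scoped to just that one helper gets you most of the way there without the blanket grant (the user name and path here are only illustrative):

    youruser ALL=(root) NOPASSWD: /usr/local/bin/delort-container

Either way, only the small, inspectable binary ever runs as root.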


Uhhh, except that this in no way actually implements CSP, given that more than one actor can just hang onto a resource despite having passed a pointer to someone else?

Who's meant to deal with freeing? Do you necessitate GC?

It's also fairly distasteful that all type safety is discarded.


Thanks for the feedback!

I didn't mean to imply that this project even remotely implements CSP. What I mean to say is that this library is an implementation of the channel primitive associated with CSP. I mentioned CSP to give the project some context outside of Go.

And yes, the pointer can still be mucked with after sending over an eb_chan; perhaps I should call them "cooperative channels," meaning the sender agrees to cooperate with the receiver and not modify the data after sending.

> Who's meant to deal with freeing? Do you necessitate GC?

eb_chan instances are reference-counted, so any entity that needs to ensure a channel's existence should retain it, and release it when it no longer needs it.

I decided to make eb_chan a reference-counted type instead of using an alloc/free pattern because the former is much more flexible in a multithreaded environment. Furthermore, the RC pattern is a superset of the alloc/free pattern, so if you prefer to use it as such, just never retain the instance after creation.

> It's also fairly distasteful that all type safety is discarded.

I agree, but so far I haven't found a better way given that the target language is C. I've thought about using some preprocessor magic to allow defining custom eb_chan types; I may have to revisit that.
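
Something along these lines, maybe (just a rough sketch; the eb_chan_send/eb_chan_recv signatures below are assumptions about the API, not necessarily what the library actually exposes):

    #include <stdbool.h>
    #include <stdlib.h>

    /* Hypothetical: generate a typed wrapper around an untyped channel
       that is assumed to move a single void* value per send/recv. */
    #define EB_CHAN_TYPED(name, T)                                    \
        static inline bool name##_send(eb_chan c, T v) {              \
            T *box = malloc(sizeof(T));                               \
            if (!box) return false;                                   \
            *box = v;                                                 \
            return eb_chan_send(c, box);                              \
        }                                                             \
        static inline bool name##_recv(eb_chan c, T *out) {           \
            const void *box = NULL;                                   \
            if (!eb_chan_recv(c, &box)) return false;                 \
            *out = *(const T *)box;                                   \
            free((void *)box);                                        \
            return true;                                              \
        }

    /* EB_CHAN_TYPED(int_chan, int) would then give int_chan_send/
       int_chan_recv, and the compiler rejects anything that isn't an int. */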

Thanks again!


Cheers for the response.

My question about deallocation was regarding the resource though, not the channel.

Honestly, while from a best-practice standpoint it shouldn't be too hard (you need to either send or free a resource when you're done with it), in practice my intuition is that codebases using this will be rife with exciting use-after-free bugs.

Separately, have you done any testing to see what valgrind makes of it?


It seems that touching any surface causes instant death?


Nope, try harder ;)


I got onto the second screen and ragequit.


That's what mapping a no-permission (-rwx) page at 0x0 does, and as a result it segfaults, which is an access violation.


There is some subtlety about dereferencing a null pointer. Many languages (C, C++, Rust) state that *NULL is undefined behaviour, that is, the compiler can assume that it never happens and optimises based on this. This can lead to a "misoptimised" program that doesn't actually segfault when the source suggests it should.

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...
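
For a contrived sketch of the sort of thing that post describes (not code from the post itself):

    int read_or_default(int *p) {
        int v = *p;        /* dereference before the check...              */
        if (p == NULL)     /* ...so the compiler may assume p is non-null  */
            return 0;      /*    and delete this branch as dead code       */
        return v;
    }

    /* After inlining, a call like read_or_default(NULL) whose result is
       unused can have the load eliminated too, so the segfault the source
       seems to guarantee never actually happens. */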


No, you are wrong.

First, mapping that page doesn't cause all null pointer dereferences to segfault.

And second, the language doesn't require a segfault. In fact, it explicitly permits the implementation to do whatever it likes.

That is the difference between safe and unsafe. It is in the language definition.


"the language"... which one? AIUI, Go doesn't state that null dereferences are undefined behaviour, but rather that they are guaranteed to panic.


"The language" ?

C doesn't define the behaviour of a null deref, but most compilers map a -rwx page there to ensure that attempts to deref fault.

In what circumstance do they not?


The compilers don't map pages there; the operating system does. The problem is that the compiler will optimise assuming a null deref never happens, so you can have source that looks like it should crash due to a null deref, but the compiler has "misoptimised" it to have very different behaviour.

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


...I'm talking about throwing an exception or something similar, not getting a segfault. Like in Java.


Scroll from the top to the bottom (or vice versa) really hard.

Watch the "3d layout engine" flip out thoroughly (tested only on Chrome stable on Darwin).


Try grabbing the scroll bar with your mouse and pulling it down the page. Nope, apparently scrollbars aren't for that.

This doesn't seem to be designed with regular desktop usage in mind.


I don't know why but it seems to be perfectly acceptable to break the scrollbar these days.


I don't think it is. Famous is the first I've seen to completely reimplement (and fuck up) something as fundamental as scrolling.


What about the countless sites that have infinite scrolling (Facebook for example)? It's not as bad as Famous but still breaks it in my opinion.


Infinite scrolling doesn't break scrolling, it breaks backwards/forwards navigation.


It does both (infinite scrolling is a horrible idea). If you try dragging the scrollbar it will jump around every time more content is loaded and you lose your position on the page.


With that bounce-back effect it doesn't seem like it was designed with usage in mind.


I think it might be one of those "features" that someone thought would make the site look cool, but really just pissed off the users.


Apart from the slow pace at which the page scrolls back, the fact that while it's doing so you cannot scroll up anymore (or down, when you're at the bottom of the page) annoys me even more.

What Apple did, making scrolling behave like a sheet of paper constrained by a rubber band, seems more natural than what this "3d layout engine" is doing. Giving it even more thought, it makes sense that even on a non-scrollable page, scrolling (giving some form of input) should at least cause a confirmation that it recognised the action you performed.


I also noticed that on my phone (Nexus 4) sometimes the scrolling just keeps going. I'm not sure that's how they meant it to behave, but even if it is, it's not great.


I was very sad to discover that most of the hits (I only checked one page) for slash on Stack Overflow are actually questions involving the slash character (generally HTTP request parsing).


Developers who use the popular language `Visual Basic` for their GitHub projects must never have any problems, as they don't ever ask questions on Stack Overflow. Oh wait, it's tagged differently there: `vb6` and `vb.net`.


Yep, the tagging needs to be worked on; Julia gets tagged as julia-lang on SO. And I am sure many of the languages ranking high on GH but very low on SO will have the same issue.


I dropped out super early. Worked for some software companies, built some internets.

Trying to get a US visa now, the decision to not get a degree is pretty bitter. That said, this is the first and I optimistically anticipate the last time a degree would have been any use to me, so I'll just power through and hope for the best.


For H-1Bs, most areas pretty much require someone competent to have gone through a degree program in that field. You can't exactly self-learn biochemistry or semiconductor process engineering because of the tremendous equipment needs.

Software is a huge anomaly because it is so easily self-learnable. The supposition that someone good at it needs a degree is patently false. It's really a shame. I know a few people with things like psychology degrees who are proficient programmers, and their path to a visa is not straightforward :(


For the L-1B there's no actual degree requirement, but the burden of proof for eligibility is still set incredibly high.

I understand to some degree the rationale behind the absurd process, at the same time it's incredibly frustrating to have a company that wants me here, and no avenue or realistic end in sight to make it happen.


L-1B is an intra-company transfer. You still have to prove "skill"; there's just no degree requirement. If you're a specialized worker (L-1B) or manager (L-1A) and have been so for over a year at your sponsoring company (a requirement for L visas), with a full salary, you presumably have the skills.


Yeah, trust me on this; they don't presume shit.

So far there's a pile of documents about 30cm high, and that hasn't yet been enough to appease USCIS.


Just curious: what country are you from? I'm from Denmark and my L-1B process wasn't nearly as bad; certainly not as bad as my IR1 which is still ongoing--but maybe the fact that I have an IR1 application helped with the L visa somehow.


I'm from Australia.

We basically have a gimme with the E3, but it has the degree requirement.


Interesting. A couple of years ago a company was going to move me to the States on an L-1A (executive transfer); even though I was "only" the tech lead, they seemed confident it would be fairly easy. Then again, they then proceeded to go bust in the GFC, so what would they know.

Any hope of being transferred under that category instead?


The problem with the L-1A is that you're expected to directly manage a significant number of people. As far as I understand, it's harder to get, but it is extendable up to 7 years, compared to only 5 years for the L-1B.


I see. Curious.


That's probably true for a largish percentage, but it's a huge generalisation that does nothing to actually help OP with his question.


Well, the advice here would be to reflect on whether they want to drop out of college because it's not challenging or because it is too challenging.


Or because it's too expensive. Or it's getting in the way of more important things. Or too stressful. Or someone at the uni is making their life miserable. Or one of the other 10 million reasons that people drop out.

To presume that this decision is entirely motivated by how hard uni is is astonishingly naive, imo.


Oh damn, I wanted to reply to the dropout/billionaire comment. sry!


Oh right. As you were :)

