Hacker News | sampsonetics's comments

Let's be careful with the phrase "open office plan". Many articles lump cubicle farms together with partition-free spaces in the same category, but whenever they're treated as separate choices the research tends to show cubicle farms being worse than partition-free spaces on most dimensions, including noise distraction -- i.e. private offices > partition-free spaces > cubicle farms. Interestingly, this particular article mostly avoids this distinction but does address it in the final paragraph:

"There are solutions, says Cornell's Hedge. The trend toward open offices and hard office furniture makes noise distraction worse, so adding carpet, drapes and upholstery can help. He recommends, perhaps counterintuitively, getting rid of cubicle walls, which provide the illusion of sound privacy, but actually make people less aware of the noises they create."


Reminds me of my favorite bug story from my own career. It was in my first year or two out of college. We were using a commercial C++ library for making HTTP calls out to another service. The initial symptom of the bug was that random requests would appear to come back with empty responses -- not just empty bodies, but the entire response was empty (not even any headers).

After a fair amount of testing, I was somehow able to determine that it wasn't actually random. The empty response occurred whenever the size in bytes of the entire request (headers and body together) was exactly 10 modulo 256, for example 266 bytes or 1034 bytes or 4106 bytes. Weird, right?

I went ahead and worked around the problem by putting in a heuristic when constructing the request: If the body size was such that the total request size would end up being close to 10 modulo 256, based on empirical knowledge of the typical size of our request headers, then add a dummy header to get out of the danger zone. That got us past the problem, but made me queasy.
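For illustration, the workaround might have looked something like this sketch (all names here are hypothetical, not from the actual codebase): check the assembled request's size against the dangerous residue and pad with a throwaway header when it's too close.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the workaround: if the total request size is
// close to 10 (mod 256) -- the residue that triggered the bug -- append
// a dummy header to push the size out of the danger zone. The margin
// accounts for imprecision in estimating the final header size.
std::string maybe_pad_request(std::string request) {
    const int kDangerResidue = 10;  // '\n' is ASCII 10
    const int kMargin = 4;
    int residue = static_cast<int>(request.size() % 256);
    if (residue >= kDangerResidue - kMargin &&
        residue <= kDangerResidue + kMargin) {
        request += "X-Padding: avoid-bug\r\n";  // dummy header, 22 bytes
    }
    return request;
}
```

In the real code the check ran while constructing the request, using the typical header size to project the final total, but the idea is the same.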

At the time, I had looked at the code and noticed an uninitialized variable in the response parsing function, but it didn't really hit me until much later. The code was something like this:

  void read_status_line(char *line) {
    char c;
    while (c != '\n') {
      c = read_next_byte();
      *(line++) = c;
    }
  }
Obviously this is wrong because it's checking c before reading it! But why the 10 modulo 256 condition? Of course, the ASCII code for newline is 10. Duh. So there must have been an earlier call stack where some other function had a local variable storing the length of the request, and this function's c variable landed smack-dab on the least-significant byte of that earlier value. Arrrrgh!
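A fixed version of the parser would read each byte before testing it. Here's a self-contained sketch, with a simulated byte source standing in for the library's actual I/O (the real read_next_byte isn't shown in the story):

```cpp
#include <cassert>
#include <cstring>

// Simulated input stream standing in for the library's socket reads.
static const char *g_cursor = "";
static char read_next_byte() { return *g_cursor++; }

// Fixed parser: read a byte *before* the loop tests it, so the
// condition never examines an uninitialized variable.
void read_status_line(char *line) {
    char c = read_next_byte();
    while (c != '\n') {
        *(line++) = c;
        c = read_next_byte();
    }
    *line = '\0';  // terminate the string; the original never did
}

// Convenience wrapper: parse the status line out of a raw response.
void parse_status(const char *response, char *out) {
    g_cursor = response;
    read_status_line(out);
}
```

Note the original also never bounds-checks `line`; a production fix would pass the buffer size as well.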


That sounds shockingly like a bug I remember one of our best developers finding when I worked at Homestead in the late 90's (and I remember being in awe then of his ability to deduce the pattern out of the seeming randomness).


I also like your wording in terms of the codebase drifting from the team's understanding of the problem -- a drift that's important to remain continually mindful of.

However, to the point of your blog post, there have definitely been others talking about when "technical debt" might be okay.[1] A big challenge is that not only do people disagree on when debt is present; there are multiple completely different definitions, and while some behave a little like debt, many don't behave like debt at all.

[1] http://agilefocus.com/2012/07/02/technical-debt-the-good-the...

(I wrote that post, but it also has lots of links to other people talking about different definitions of technical debt.)


I wrote this article, at carsongross's urging, as a follow-up to the discussion around his earlier article, "Rescuing REST from the API Winter".[1]

[1] https://news.ycombinator.com/item?id=10941616


Quoting from the court order: "Failure either to submit a Objection Form or letter to the Court by April 1, 2016 will be deemed a waiver of your right to object to the disclosure of your or your child’s protected personal information and records as described above."

And yet it doesn't actually say that "objecting" will prevent them from disclosing anything.


(I wrote a fun little essay about Press & Dyson and Adami & Hintze for my scientific writing class last summer. The assignment was to write an essay of no more than 1000 words, intended for a non-scientific audience. Not sure if pasting it here is too much for HN etiquette, but it's not published anywhere else, so here goes. I hadn't read the article linked above when I wrote this, but it covers very similar ground in much simpler terms. Disclaimer: I didn't actually interview any of the researchers for the purpose of this short class assignment, so I was speculating a bit about their subjective states!)

Christoph Adami and Arend Hintze didn't believe what they were reading. The two evolutionary biologists suspected that something didn't quite add up in a 2012 article by two of the most influential physicists of our time, William Press and Freeman Dyson.[1] Press and Dyson claimed that they had discovered a mathematical trick that flew in the face of results from three decades earlier, which had so far stood the test of time.

The subject of their argument was a paradoxical "game" called the Prisoner's Dilemma. This game was invented half a century ago to challenge the tenets of game theory, a branch of mathematics beloved by economists and political scientists. In the Prisoner's Dilemma, two players face an awkward situation in which they cannot communicate but their choices affect each other. Each player must make a simple choice: Will you "cooperate" or will you "defect"?

The game is set up so that it presents a common dilemma faced in real life. The players will be better off if they both cooperate than if they both defect. But either player can gain an advantage by defecting when the other player cooperates. With every move, you are tempted to take advantage of the other player and afraid that they will take advantage of you, even though you both know that overall it's better to cooperate.

The Prisoner's Dilemma is reminiscent of any situation in which two individuals will share equally in the benefits of a shared activity. Cooperating then means giving it your all and defecting means slacking off. If you both cooperate then you both benefit from your hard work. If you both defect then there are no benefits to be had. Whenever the other player cooperates, however, you are better off defecting in order to benefit from their effort without exerting any yourself. Likewise, whenever the other player defects, then once again you are better off defecting yourself because it's not worth your effort to support them.

The paradox of the Prisoner's Dilemma is this: If it always seems better to defect, no matter what the other player does, how do we ever end up cooperating? This question, and therefore the Prisoner's Dilemma itself, is of prime interest to economists, political scientists, biologists, and many others. The standard answer from game theory was that a "rational" player would always defect, but some researchers suspected that cooperation might actually be favored by evolution in the long run.

In a crucial paper in 1981, political scientist Robert Axelrod and evolutionary biologist William Hamilton reported on a Prisoner's Dilemma tournament that Axelrod had conducted.[2] Axelrod invited dozens of researchers to submit computer programs that would play the game thousands of times. Hoping that the tournament would provide insights into how cooperation might evolve, he contacted one of its participants, biologist Richard Dawkins, who introduced him to Hamilton,[3] a biologist at Axelrod's own university who had published a research paper of his own about the Prisoner's Dilemma.

In the preceding decades, discussion of the Prisoner's Dilemma was largely philosophical. It was an interesting ethical puzzle, intriguingly simple and yet maddeningly difficult to reason about. With his computerized tournament, Axelrod advanced Prisoner's Dilemma research from mere philosophical argument to proper scientific experiment.

The result was stunning. The submitted programs displayed a wide variety of strategies -- some preferred to cooperate, some preferred to defect, some tried to outwit their opponents. But the winner was one of the simplest programs of all, called Tit-For-Tat. Its strategy was to always start out cooperating. As long as its opponent likewise cooperated, Tit-For-Tat happily went on cooperating, too. But as soon as its opponent defected, even for a single round, Tit-For-Tat followed along. Its choice for each move was whatever its opponent chose for the preceding move.
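As a sketch (using the standard payoff values from Axelrod's tournament: 3 for mutual cooperation, 1 for mutual defection, 5 for a lone defector, 0 for the sucker), Tit-For-Tat and an iterated match can be written in a few lines:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Standard Prisoner's Dilemma payoffs for the first player:
// both cooperate -> 3, both defect -> 1, lone defector -> 5, sucker -> 0.
int payoff(char me, char other) {
    if (me == 'C') return other == 'C' ? 3 : 0;
    return other == 'C' ? 5 : 1;
}

// Tit-For-Tat: cooperate on the first move, then echo whatever the
// opponent played on the previous move.
char tit_for_tat(const std::vector<char> &opponent_moves) {
    return opponent_moves.empty() ? 'C' : opponent_moves.back();
}

char always_defect(const std::vector<char> &) { return 'D'; }

// Play n rounds between two strategies; each strategy sees only the
// other's move history. Returns {score_a, score_b}.
std::pair<int, int> play(char (*a)(const std::vector<char> &),
                         char (*b)(const std::vector<char> &), int n) {
    std::vector<char> moves_a, moves_b;
    int score_a = 0, score_b = 0;
    for (int i = 0; i < n; ++i) {
        char ma = a(moves_b), mb = b(moves_a);
        score_a += payoff(ma, mb);
        score_b += payoff(mb, ma);
        moves_a.push_back(ma);
        moves_b.push_back(mb);
    }
    return {score_a, score_b};
}
```

Two Tit-For-Tat players cooperate every round; against an always-defector, Tit-For-Tat loses only the first round and then matches defection for defection.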

In Axelrod and Hamilton's summary of the tournament, they noted that all of the top programs shared Tit-For-Tat's preference for starting out cooperatively. The more aggressive programs effectively lost the trust of the others and missed out on the benefits of cooperation. Cooperation was vindicated once and for all. Axelrod and others have repeated the tournament with similar results.

Hence Adami and Hintze's skepticism reading Press and Dyson's claim, 31 years later, of discovering a new mathematical trick for playing the Prisoner's Dilemma along with a rigorous proof that it should beat the classic cooperative strategies. It was a fascinating mathematical theory, but how could it have gone unnoticed before?

Press and Dyson called their discovery "zero-determinant (ZD) strategies" because the technique involves causing a mathematical quantity known as a "determinant" to be zero. By manipulating this determinant, one player can limit how well the other player performs. The ZD player can choose a particular score for its opponent or can "extort" an unfair score for itself, averaged over a large number of rounds. Press and Dyson seemed to have overthrown cooperation as the star of Prisoner's Dilemma research.

ZD strategies came as a surprise to game theory experts, including Adami and Hintze. Press and Dyson's analysis was mathematically solid, but since it flew in the face of Axelrod's tournament results, something was missing. Adami and Hintze decided to run tournaments similar to Axelrod's, pitting ZD strategies against cooperative strategies to see how they evolved, publishing their results in 2013.[4]

Sure enough, coercive ZD strategies excelled in one-on-one competition but fared poorly in the tournament. Why? In a tournament, ZD strategies are competing against each other as well as non-ZD strategies. Just as Axelrod had learned three decades earlier, cooperative strategies reinforce each other while non-cooperative strategies are marginalized. Cooperation is vindicated once again!

That's not quite the end of the story, though. It turns out that ZD strategies can be generous as well as coercive. ZD players can influence their opponents to accept good scores as easily as bad scores -- and generous ZD strategies do just fine in a tournament setting. In fact, you're already familiar with one such strategy: the classic Tit-For-Tat itself is the "most fair" of all ZD strategies!

[1] "Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent," Proceedings of the National Academy of Sciences, volume 109, issue 26, pages 10409-10413.

[2] "The evolution of cooperation," Science, volume 211, issue 4489, pages 1390-1396.

[3] This introduction via Dawkins was described by Axelrod in 2012: "Launching 'the evolution of cooperation,'" Journal of Theoretical Biology, volume 299, pages 21-24.

[4] "Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything," Nature Communications, volume 4, article number 2193.


Nice article! Well-written and informative, tells a story, and uses simple, clear language. Hope you got an A. :-)


I like the text

> involves causing .. [the] "determinant" to be zero. By manipulating this determinant

can't have it both ways :)


Altran is a major contributor to SPARK these days [1].

I've been a big believer in TDD for many years (and still am), but reading a lot of Dijkstra lately I started to feel guilty for not keeping up with the state of the art in formal techniques. :) So I've been looking around in just the last couple of weeks to figure out what the best tools are these days. This discussion has been very helpful.

[1] http://www.spark-2014.org/contributors


> But the magic of HTML is in the browser where you click around. Having a standard for form equivalents and links in a JSON API doesn't pack the same punch because there is little purpose to have freeform consumption of an API the same way that we have freeform consumption of web pages by humans.

I think that's the whole point of the original article: If we're creating an interface for human consumption (a UI), then REST is eminently relevant, and we shouldn't forget the power of HTML-over-HTTP as its most widely-supported realization. But if we're creating an interface for programmatic consumption (an API), then REST is more problematic and we shouldn't feel guilty for not achieving its ideals.


I haven't used intercooler.js myself, but I think it's a really beautiful example of REST's conception of "Code-On-Demand": It's not just downloading an RPC-style client into the browser, it's enhancing the browser with a richer hypertext implementation. Since hypertext is the core of REST, intercooler.js is embracing and enhancing REST itself in the process.


I was under the impression that iOS and Android both support HTML right out of the box. Do they no longer bundle Web browsers, or are you suggesting some other drawback that is critical to your particular application?


Not for mobile apps. An app can display HTML via a web view, but that has already proven to be a terrible experience.

There's a reason JSON APIs have become more popular. Besides, clear separation of data from view means the data can be used by other third-party services.

The future of mobile apps will be native client-side view rendering with frameworks like React Native and Angular 2 Universal, not serving HTML views.


I assume they are using the same JSON API for their client-side web app and their native mobile apps. Is there a compelling argument to move template rendering to the server if the native applications don't render using an HTML DOM?

