Hacker News | acrefoot's comments

This podcast explains the three theories and adds the Benign Violation Theory: https://podcasts.apple.com/us/podcast/whats-so-funny/id15545.... Their theory can be summed up as “something is wrong, but ok/safe”, but it takes some illustrating.

I liked this skittles study, pulled from the transcript:

MANDY: The skittles study goes like this. Participants go into a room to fill out a survey. There's a table and a bowl of skittles, and there's an actor posing as a research assistant. And the goal of the study is to figure out the funniest way the actor can offer the bowl of skittles.

They ran this test in lots of different ways, but in the first condition:

<TAPE> CALEB: The actor will say, I'm sorry to interrupt you, but in a couple seconds I'm supposed to offer you these Skittles. Would you like these Skittles?

MANDY: This isn't that funny.

MANDY: But then, things get a little weird. In another group, the actor just flings the skittles at them.

<TAPE> CALEB: Out of nowhere, they'd launch the bowl of Skittles and only afterwards say, I'm sorry to interrupt you, but I was told to launch the bowl of skittles.

MANDY: This is surprising but not that funny. It's a little messed up. In the last group though, the actor gives them some warning before it happens.

<TAPE> CALEB: They say, I'm sorry to interrupt you, but in a few seconds I need to throw this bowl of Skittles at you. Then they launch the bowl of Skittles at the participant.

MANDY: It's pretty funny.

<TAPE> CALEB: there was more laughter, certainly, in both of the throwing conditions, but there was a lot more laughter when the person was told the Skittles were gonna be thrown at them first.

MANDY: Under Incongruity theory, people should be more likely to laugh if there's surprise -- but that's not the case here. People are more likely to laugh when it's not a surprise, when they're warned before it happens.

<TAPE>

CALEB: That's because once you hear that and the Skittles get thrown, you know, it's part of the study,

PETER: You're prepared

CALEB: You're prepared! The point of this was to show that it's not about surprise and surprise in some cases actually hurts.

MANDY: So if it's not surprise that's making people laugh when skittles are being thrown at them, what is it? This is where Peter and Caleb's own theory comes in: what they call the benign violation theory.

It’s no great joke, but it serves as a nice tool for examining the theories of humor. The humor here doesn’t depend on incongruity. Maybe a bit on the relief of tension, but that’s not a great explanation either. And in this scenario, surprise actually hurts.


I still think there is a surprise element in the above exposition.

The surprise is the realization that we’re not supposed to throw around bowls of Skittles coupled with the unacknowledged realization that this rule is just some tiny, pointless restraint. The recognition and release of this restraint allows for joy! There’s an aesthetic beauty to a rainbow of candies flying through the air, akin to fireworks or streamers, yet in daily life we choose not to throw the bowl of Skittles because we’re more focused on the imagined cleanup than the active, lived experience.


It's still surprising because it breaks from expected social convention. And it gets progressively funnier as the examples go on. Ultimately the surprise is that of a person going through the motions of social convention, like the politeness of asking permission or giving warning before doing something anti-social. In the form of: "Can you please hold my beer so I can slap you?"

I think surprise is the best root-cause theory so far.


I guess you had to be there.


I really like the ideas in Annie Duke's "Thinking in Bets". The book would suggest that the rightness of a decision shouldn't vary much with the outcome (or with time).

The book suggests that we judge the quality of the decision based on how comprehensively we evaluated possibilities before we made the decision. Our habit is to re-evaluate the quality of the decision based mostly on the outcome, even if that outcome was one of the possibilities we considered. The book suggests that a good decision can't be judged by the outcome alone, as every decision is a bet with a variety of possible outcomes. (Adjusting priors for future decisions is related to this, and one should strive to allow adjusting priors in a disciplined way while not beating oneself up or congratulating oneself too much based on outcomes which were not certain going into the decision.)

You can luck into winning the lottery, and luck into really bad outcomes while investing in index funds, but that does not make playing the lottery a good decision. In fact, two decisions that lead to the same outcome should be judged differently, depending on how much work went into them and the quality of the decision-making framework(s) applied.
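A quick Monte Carlo sketch makes the point concrete (the payout numbers here are made up for illustration): any single outcome tells you little, but the expected value over many trials separates the good bet from the bad one.

```python
import random

random.seed(0)
N = 200_000  # simulated plays / years

def lottery_outcome():
    # Hypothetical lottery: $2 ticket, 1-in-2000 shot at a $1000 prize.
    # Expected value per ticket: 1000/2000 - 2 = -$1.50
    return (1000 if random.random() < 1 / 2000 else 0) - 2

def index_outcome():
    # Hypothetical index-fund year on a $2 stake: mean +7%, stdev 15%.
    return 2 * random.gauss(0.07, 0.15)

lottery_avg = sum(lottery_outcome() for _ in range(N)) / N
index_avg = sum(index_outcome() for _ in range(N)) / N

# Individual outcomes vary wildly, but the averages expose the quality
# of each bet: the lottery average is negative, the index average positive.
print(f"lottery: {lottery_avg:+.2f}  index: {index_avg:+.2f}")
```

A lucky lottery draw or an unlucky index year doesn't change either average, which is the book's point about judging the bet rather than the outcome.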


Whatever ClickUp uses for their ClickUp docs, it reliably gets into a confused state in which not all editors show the same state locally, and data is lost.


Tonic.ai seemed to fit that bill, but we ended up rolling our own ETL job due to cost concerns and a security preference for a simple-to-audit tool. Tonic.ai does it on-the-fly, which was merely a nice-to-have for this use case.


Any comparisons to https://www.tonic.ai?

> Based on policies you define, individual fields can be encrypted/decrypted...

Are the policies something like "retool" gets tokenized or faked data back, and the main app gets everything? Or is it more granular even within the main app? Like can I teach JumpWire about my app's users and our AuthZ ruleset?

> or they partition the data by putting some fields in a data vault and others in the main database

I was considering using VGS to tokenize sensitive data, but I prefer self-hosted and reasonably auditable code for such sensitive systems. Is that the case here?

> We’ve seen entire teams dedicated to just maintaining ETL pipelines for scrubbing PII into secondary databases!

I do this to make staging environments more realistic, which makes them double as debugging tools for production when you can't give engineers any sort of direct production access. We whitelist non-sensitive fields (most importantly foreign keys) and fill in the rest with faked data. The app looks like production, but as if all the users were bots spouting nonsense at each other. At my scale (a 50-person company), it works reasonably well with just me maintaining it.
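A minimal sketch of that whitelist-plus-fake approach (the column names and the fake-value helper here are hypothetical, not our actual tooling; a real job would use a library like Faker and run per table):

```python
import hashlib

# Hypothetical schema: whitelist the structural/non-sensitive columns
# (most importantly foreign keys), and fake everything else.
WHITELIST = {"id", "org_id", "created_at", "plan"}

def fake_value(column, original):
    # Deterministic fakes keep rows consistent across scrub runs
    # without leaking the original value.
    digest = hashlib.sha256(f"{column}:{original}".encode()).hexdigest()[:8]
    return f"{column}_{digest}"

def scrub_row(row):
    return {
        col: (val if col in WHITELIST else fake_value(col, val))
        for col, val in row.items()
    }

row = {"id": 42, "org_id": 7, "plan": "pro",
       "email": "alice@example.com", "full_name": "Alice Jones"}
scrubbed = scrub_row(row)
# Foreign keys and structure survive; PII is replaced.
print(scrubbed)
```

Keeping the foreign keys intact is what makes the scrubbed copy behave like production when you click through the app.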


Tonic is awesome! We think of synthetic data/differential privacy as a different use case - trying to replicate data across scoped environments while preserving certain properties or distributions of the entire data set. There is a security/privacy component from scrubbing the data, but the original data source is unmodified, and that's where we feel risk lies. And the desired outcome isn't to add security but to produce a data set that "looks like" the original well enough for testing/modeling/analytics.

> Are the policies something like "retool" gets tokenized or faked data back, and the main app gets everything?

Yep, that's exactly right. Application credentials are grouped under classifications, and policies can be included/excluded across classifications. We aren't passing authz through JumpWire, but for something like Retool you can configure it to connect through different proxies for different users.

> I prefer self-hosted and reasonably auditable code for such sensitive systems. Is that the case here?

Exactly. The engine which interacts with your data is almost always self-hosted, and the web app also can be if needed.

> At my scale (50 person company), it works reasonably well enough with just me maintaining it.

Makes sense! No reason to add more tools to your stack yet if the custom process isn't too burdensome.


SICM has a similar complaint about ambiguous notation. See https://groups.csail.mit.edu/mac/users/gjs/6946/sicm-html/bo.... Footnote 2 in particular is worth reading for a particularly good example of problems caused by ambiguity.

SICM as a textbook is intended to be worked out entirely with a computer.


The quote in the footnote[1] says that `f` means something different on the two sides of the equation ∂f/∂x = ∂f/∂u ∂u/∂x + ∂f/∂v ∂v/∂x.

Why is this true? I think about ∂/∂x, ∂/∂u, and ∂/∂v as higher-order functions that take the same `f` as their argument. What is wrong about this thinking?

[1] https://groups.csail.mit.edu/mac/users/gjs/6946/sicm-html/bo...

Edit: I'm guessing the footnote's reasoning is that on the left hand side, `f` is thought of as `f(x, y)`, and on the right as `f(u(x, y), v(x, y))`.


The signature of the function on the left hand side is f0(x) and of the one on the right hand side is f(u,v). The latter function signature is not f(u(x),v(x)), that would be a very grave mistake (and it would make the chain rule useless).

On the right hand side, the differential operator ∂/∂u (or ∂/∂v) is not applied to lambda x: f(u(x), v(x))—because that would be a function of x.

(I use only x rather than (x,y) for simplicity so you can see where a big problem would lie)

Be aware that the notation in ∂f/∂x = ∂f/∂u ∂u/∂x + ∂f/∂v ∂v/∂x is already very simplified and the actual whole thing would be (in somewhat more modern Leibniz notation--as opposed to the one in the footnote):

∂/∂x f0(x) = ∂/∂u f(u(x),v(x)) ⋅ ∂/∂x u(x) + ∂/∂v f(u(x),v(x)) ⋅ ∂/∂x v(x)

Where (as is usual) the (...) without space in front (sigh) denotes function calls, and the ∂/∂x f0 makes a new function (with the same formal parameters as the original f0).

Note that the function call is done on that new function (this order of precedence is the usual in mathematics). Also, the multiplication (⋅) is done after the function calls, on actual numbers. And the + here adds numbers.

Also, it is assumed that the two functions f0 and f are connected via the equation f0(x) = f(u(x),v(x)), which is implied. (The right-hand side is called function composition and could be written f0(x) = (f∘(u,v))(x), but defining that is an entire extra chapter.)

Understanding what exactly is going on where in here is already the basis for fluid dynamics.

The footnote tries to have globally unique names, so you have funny things like u = g(x,y). Ugh. For programmers, it is normal that the formal parameters of a function f (those are u and v) and the global function u are different things, so we don't need to do what the footnote did.

For clarity:

  f0(x) := ...
  f(u,v) := ...
  u(x) := ...
  v(x) := ...
Note: There's also Lagrange's notation for differential operators--but in my opinion it's useless for functions of more than one parameter. There, the first-order derivative of f would be f'. Chain rule of our example cannot be written there as far as I know. The best you could do is f0'(x) = f???(u(x),v(x)) ⋅ u'(x) + f???(u(x),v(x)) ⋅ v'(x) where the ??? is what I wouldn't know how to write. Maybe f.x and f.y or something--but that's totally arbitrary.

Note: The names of the formal parameters have no meaning outside of the function (right?), so it makes sense to just have a "differentiate first argument" operator D1 and a "differentiate second argument" operator D2 instead. Then the chain rule is: D1 f0(x) = D1 f(u(x),v(x)) ⋅ D1 u(x) + D2 f(u(x),v(x)) ⋅ D1 v(x). Again, the function calls are only made after you got the new function from the D1 or D2.

Also, evaluating an expression like f(u(x),v(x)) works just like it would in C.

Be aware that the definition of u(x) itself cannot contain u (that is very important), only x. So f(u(x),v(x)), even if written in symbolic equations, will NOT contain u or v in the end, only x-es.

For example if u(x) := 5 ⋅ x and v(x) := 3 ⋅ x and f(u,v) := 7 ⋅ u + v, then f(u(x),v(x)) = 7 ⋅ 5 ⋅ x + 3 ⋅ x. No u or v left.
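This concrete example can be checked mechanically, e.g. with sympy (variable names are mine; this just illustrates "differentiate with respect to the formal parameter first, then substitute"):

```python
import sympy as sp

x, u, v = sp.symbols("x u v")

f = 7*u + v      # f(u, v)
u_of_x = 5*x     # u(x)
v_of_x = 3*x     # v(x)

# The composition f0(x) = f(u(x), v(x)): no u or v left, only x-es.
f0 = f.subs({u: u_of_x, v: v_of_x})   # 38*x

# Left-hand side: d/dx f0(x)
lhs = sp.diff(f0, x)

# Right-hand side: differentiate f with respect to its formal
# parameters FIRST, then substitute u(x) and v(x), then multiply
# by u'(x) and v'(x) respectively.
rhs = (sp.diff(f, u).subs({u: u_of_x, v: v_of_x}) * sp.diff(u_of_x, x)
       + sp.diff(f, v).subs({u: u_of_x, v: v_of_x}) * sp.diff(v_of_x, x))

print(lhs, sp.simplify(rhs))  # 38 38
```

Differentiating the composed f0 directly and applying the chain rule term by term agree, which is exactly the bookkeeping the ambiguous notation hides.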


There was a pretty fun Radiolab episode (https://www.wnycstudios.org/podcasts/radiolab/episodes/91569...) that suggested that the process of recall changes memories, and that they are susceptible to change while the memory is being recalled. I'm sure ECT is disruptive in many ways, including just through damage and toxicity to the cells. But for your fuzzing idea, a hypothesis might be that recall is triggered, and while the memory is susceptible, it's disrupted permanently. Another possibility is that the memory is still there, but the proper mapping to it is lost. Reality is probably a combination of many factors.


I'm not sure exactly what you believe, but since you're talking about information storage, can I ask if you mean for working memory or for long-term memory? For long-term memory, the storage is durable and mostly static, while recall is dynamic (and may even affect the storage after further consolidation). For working memory, I imagine it's a pretty dynamic process for both storage and recall. The top-level link seems to only be about working memory, so the title of the HN post should probably be changed to avoid confusion.


The headline says working memory, not long-term episodic memory, but the comments (and headline) here don't seem to distinguish between the two. We can disrupt short term memories in a variety of ways. When memories consolidate, they move to different structures. We can hold a lot more information in long-term memories than in working memory.

Because I hear this hypothesis come up from time to time--it's unlikely that consciousness or long-term memory is maintained by e-fields generated in the brain, or that they require continuous electrical activity. Stark evidence against it comes from ischemia studies. Ischemia usually accompanies serious underlying issues, so more controlled examples are easier to reason about: surgeons may use deep hypothermic circulatory arrest as protection for the brain in procedures with extended ischemia time requirements. In these cases, achieving electrocerebral silence (flat EEG) is one of the checklist items for the procedure. Clinical cases are worth reviewing for any hypothesis that suggests that long-term memory depends on continuous electrical activity* of neurons in the brain.

Since the events of ischemia, brain flatlining, and brain death are so closely linked in time, it's easy to conflate them. After a cold stop+start, the brain doesn't immediately jump back to normal function--there are a variety of recovery processes, worth studying better, involved in returning to normal brain waves and brain function. Ischemia-related damage is more often from metabolic problems than from a discontinuity of electrical function. The reason for the cooling is to preserve the local energy reserves of the cells--when those are lost, the cells may have too much difficulty returning to normal when blood is later reintroduced, and that's where you see brain damage or death. This kind of procedure is not without serious risk factors.

* Of course, electrical activity is needed for recall, but the point that I'm making is that the memory is later available for recall even after a period of discontinuity in the EEG.


If it were electric fields, wouldn't we also expect the extreme EM fields generated by MRI machines, or even strong permanent magnets held near someone's skull to have some effect on memory?

A nearby lightning strike or overhead power line should also have some effect.

It would be quite amazing if the body were able to neutralize/counter such strong external electric fields.


Good observation. I would not be surprised if the answer to your question is yes: an MRI does have some effect on currently active memories and thoughts. It seems to a non-scientist like me that it would be difficult to use the usual tools to measure this inside an MRI tube, so maybe it's a hard question to answer even for experts in measuring such things.

Plenty of folks who do not consider themselves claustrophobic just completely fritz out inside an MRI tube. Could their usual ability to reason about and maintain composure be reduced by just such interference?


The research suggests the EM fields and underlying neural networks work together, i.e., fields might provide a top-down, energy-based attention mechanism while the network structures implement bottom-up agglomerations of information. Blocking EM fields wouldn't destroy memory but might pause or disturb the learned activation flows of neurons.

""" The researchers hypothesize that the field even appears to be a means the brain can employ to sculpt information flow to ensure the desired result. By imposing that a particular field emerge, it directs the activity of the participating neurons.

Indeed, that’s one of the next questions the scientists are investigating: Could electric fields be a means of controlling neurons?

“We are now using this work to ask whether information flows from the macroscale level of the electric field down to the microscale level of individual neurons,” Pinotsis says. “To make the analogy with the orchestra, we are now using this work to ask whether a conductor’s style changes the way an individual member of an orchestra plays her instrument.” """


It's possible that the field is a good proxy for underlying coherence in a redundant neural subnet that may be harder to measure directly. The field itself may or may not play a causal role---that would depend on whether the field could induce or reinforce neural activity. Even if the electrical field plays a role in marshaling the necessary neural activity, the representation must come to exist in the neurons in some way, since they are what are connected to the muscles that carry out intention.


> The headline says working memory, not long-term episodic memory, but the comments (and headline) here don't seem to distinguish between the two.

Please note, this is the press release headline and sub-headline:

“Neurons are fickle. Electric fields are more reliable for information.”

“A new study suggests that electric fields may represent information held in working memory, allowing the brain to overcome “representational drift,” or the inconsistent participation of individual neurons”


Is that really incompatible with information being held in e-fields? It's just that these fields are generated by groups of neurons, which can indeed be "stopped and started back up"?


In that case it’s unclear why you’d not refer to it as the information being encoded in the neurons. I don’t think anyone interprets that to mean the information is retrievable sans electrical field.


Forgive me, because I still am unsure: can a bot in Slack be given permission to create channels, read from, and delete only the channels it creates?

Or must one give the Axolo bot permissions to all public channels in a slack workspace?


Unfortunately, Slack does not have that level of granularity yet, so the permissions are granted to all public channels. But we only interact with Slack using the saved ID of the specific channel we create; we do not store any information from other channels.

