- I agree that something like this is necessary, or the whole model of the internet will be broken, as Matthew Prince [explained in this video](https://www.youtube.com/watch?v=H5C9EL3C82Y).
- Their approach seems very imperfect, but I understand that you have to start somewhere.
- They are paying per crawl… but in fairness it should really be per use. It’s like paying music artists once when they upload to Spotify rather than per play, even though one artist gets zero plays and another gets ten million. Sure, the idea is that crawlers will bid more for popular authors’ content, but what if an unknown author has a one-hit-wonder piece of content? They’ll still just get a couple of basis points per crawl, and then the cat is out of the bag.
- One solution to this would be requiring a GDPR-style right-to-be-forgotten mechanism, where the author grants a limited-duration license to the content (say… one week), after which it must be deleted and re-licensed. This would be a huge fix for the whole thing… and the more I think about it, the more I think it’s essential for this to work.
- The auction mechanics are biased toward the crawler… if there is a spread between the author’s asking price and the crawler’s max bid, the crawler pays the lower price set by the author. It should be the average (see the sketch after this list).
- They will need to provide content authors with analytics about the bids crawlers are making and how prices are set.
- If this whole thing works, then products that optimize bid mechanics on behalf of authors will be a big growth industry.
- If Cloudflare are setting themselves up as the clearing mechanism for payments, that’s far too much power and profit for one company. It’s even worse than the Google monopoly. Somehow the payment mechanics need to be democratized.
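A minimal sketch of the midpoint idea in Python. Everything here is hypothetical: the function name, the numbers, and the pricing rule illustrate the proposal above, not Cloudflare’s actual auction logic.

```python
def clearing_price(author_ask: float, crawler_max_bid: float) -> float | None:
    """Split the spread between the author's ask and the crawler's max bid.

    Returns None when the bid doesn't meet the ask (no sale).
    """
    if crawler_max_bid < author_ask:
        return None  # no deal: the crawler won't pay what the author asks
    # Midpoint pricing: both sides share the surplus equally, instead of
    # the crawler capturing the whole spread by paying only the ask.
    return (author_ask + crawler_max_bid) / 2

# Hypothetical numbers: author asks $0.002 per crawl, crawler bids up to $0.01.
print(clearing_price(0.002, 0.01))  # -> 0.006, rather than the author's 0.002
```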
I’ve used ChatGPT as an editor and had very good results. I’ll write the whole thing myself, feed it into ChatGPT for editing, and then review its output and manually decide which pieces I want to incorporate. The thoughts are my own, but sometimes ChatGPT is capable of finding more succinct ways of making the points.
I generally make sure I use diff tools for that type of task, because LLMs are really good at making subtle changes that are wrong and that you don't easily notice.
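A minimal sketch of that workflow in Python, using the standard library's difflib; the file names here are hypothetical:

```python
import difflib

# Read the original draft and the LLM-edited version (hypothetical file names).
with open("draft.md") as f:
    original = f.readlines()
with open("draft_edited.md") as f:
    edited = f.readlines()

# A unified diff makes every change explicit, so subtle unwanted edits
# can't slip through unnoticed before you incorporate them.
diff = difflib.unified_diff(
    original, edited, fromfile="draft.md", tofile="draft_edited.md"
)
print("".join(diff))
```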
> In my humdrum life, the daily battle hasn't been good versus evil. It's hardly so epic. Most days, my real battle is doing good versus doing nothing.
Wow. This part really resonated with me. I will try to keep this in mind.
I literally just copied and pasted that exact part into my notes. Something about "versus doing nothing" really hits hard. I can do more than nothing! And also, sometimes, I need grace and time.
Interesting. I’ve noticed this happening for me but I thought it was because my fingertips are calloused from playing guitar. But I’m also in my late forties. So it’s probably a double whammy for me.
WordPress doesn't have employees. It's open source software. Do you mean Automattic (i.e., wordpress.com) employees or WP Engine employees? Or just anyone who is employed and working on WordPress?
I think this is correct. Some people were misaligned, but the majority seemed to be taking advantage of a generous severance for different personal reasons. And anyone on a PIP would have a hard time turning it down.
If you've got the resources, this is a great way to clear out the dead wood. Most orgs either make life so unpleasant the person leaves to escape, or do nothing and let the poison kill other parts of the company.
Not sure why this is getting downvoted. The idea that the act of observation impacts an experiment (or how particles behave) is one of the most counterintuitive and surprising “truths” I’ve ever heard. I would love to hear a logical explanation of why (not just a description of it).
Observation doesn't impact experiments. Interaction does. In fact, it is quite difficult to formulate the "collapse" of the wavefunction as a physical interaction, and to the extent that we can, the experimental evidence seems to suggest that it is not. This is a common misconception about quantum mechanics, partly because even undergraduate texts conflate the uncertainty principle with observation.
The logical explanation: "observation" has nothing to do with conscious woo, it's just that in order to have a definite answer we build experiments so they collapse the wavefunction.
It's like asking someone on a date: maybe they were in a superposition before, but now they have to answer, and having answered ("been observed"), that answer is highly likely to stay constant in the short term.
(when you think about it from this point of view, it's classical physics that's counterintuitive: why should we expect that asking questions about one projection of state doesn't affect the answers we get from later asking about others, not even in the slightest?)
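A toy sketch of that intuition in Python (a cartoon of textbook measurement, not a model of any real apparatus): measuring samples a definite outcome via the Born rule and leaves the system in the matching basis state, so asking again right away gives the same answer, like the date analogy above.

```python
import random

# Toy qubit: amplitudes (a, b) for the state a|0> + b|1>, with |a|^2 + |b|^2 = 1.
def measure(state):
    a, b = state
    outcome = 0 if random.random() < abs(a) ** 2 else 1  # Born rule
    # "Collapse": the post-measurement state is the basis state just observed.
    return outcome, ((1.0, 0.0) if outcome == 0 else (0.0, 1.0))

state = (0.6, 0.8)              # superposition: P(0) = 0.36, P(1) = 0.64
first, state = measure(state)
second, state = measure(state)  # asking again returns the same answer
assert first == second
```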
The point I was trying to make is that if we are indeed in a simulation (and I'm not saying that we definitely are, but if we are), one way to design such a simulation to make it more efficient is to make computations depend on the observer, meaning that, sorry, in this case it would have conscious "woo" built in.
Just as, in a 3D game, only the objects visible from the current perspective are drawn in each frame.
Currently unobserved parts of the simulation might exist in a different form.
It's okay to disagree with simulation theory, but it is a perfectly valid possibility according to everything we know.
Personally, I don't think it's the only possibility, but I think it's quite probable and should be taken seriously.
One problem is that gravity is universally coupling, so no part of the universe is technically "unobserved." I suspect that we could look back at the dynamics of large scale systems and see deviations from GR if the simulation were neglecting any part of the universe in the absence of observation/interaction.
If I were building a simulation, I would not have made gravity universally coupling, because it makes it hard to chunk reality up into parts. Thus the universal coupling of gravity seems like evidence against the simulation hypothesis.
The reason for my personal choice to not take simulation theory seriously is because simulations are an instance of Russell's Teapot. Anything which can be explained as S simulating T can be explained more simply as just T (or, in the opposite direction, even more complicatedly as R simulating S simulating T, etc. Can* we go all the way to a countably infinite tower of simulations?).
* If yes, then I'd have to admit that the omega-tower could be as interesting to study as the 0-tower; but if not, then I'd maintain the 0-tower is way more interesting than any of its successor towers.
If you were to believe the universe was a simulation, would you do anything differently? Could you?
(this line of approach, less formal and perhaps more congenial to CPS' other culture, is inspired by Dewey and James' pragmatism, in which philosophical problems are only well-posed if they have "cash value". We hackers don't make comparisons without subsequently using the flag value; they didn't ask questions whose answers are moot)
If we were in a simulation, due to the hard problem, it would be impossible for the simulators to know whether anything in their simulation had qualitative experiences, so they could not make conscious observation a prerequisite of detail rendering, only interaction. No woo necessary.
New generations build onto the scientific knowledge of previous generations. It may not be fast but that sounds like recursive improvement to me. It seems reasonable for AI to accelerate this process.