Hacker News | sbergot's comments

And for a while the status was "there might be issues on azure portal".


There might have been, but they didn't know because they couldn't access it. Could have been something totally unrelated.


Now there is information about "Azure Portal Access Issues". No word about Front Door being down.


Official status pages are useless most of the time.


I work for a cloud provider which is serious about transparency. Our customers know they are going to get the straight story from our status page.

When you find an honest vendor, cherish them. They are rare, and they work hard to earn and keep your confidence.


There is no way to measure awareness. We can only know we are aware ourselves. For all we know trees or rocks might have awareness. Or I could be the only being aware of itself in the universe. We have no way to prove anything about it. Therefore it is not a useful descriptor of intelligence (be it human, animal or artificial).


> We can only know we are aware ourselves.

There are people who have a hard time recognizing/feeling/understanding other people as "aware". Even more so with animals.


Agreed. Everything that looks like intelligence to ME is intelligent.

My measurement of outside intelligence is limited by my own intelligence, so I can only recognize when something is stupider than me. For example, take an industrial machine vs. a human worker: the human worker is infinitely more intelligent than the machine, because this human worker can do all kinds of interesting stuff. This metaphorical "human worker" has done everything from laying a brick to launching a man to the Moon.

....

Imagine a super-future where humanity created nanobots and they ate everything around. Now, instead of Earth, there is just a cloud of them.

These nanobots were clever and could adapt, and they had all the knowledge that humans had and even more (as they were eating Earth, the swarm was running global science experiments to understand as much as possible before the energy ran out).

Once they ate the last bite of our Earth (an important note here: they left an optimal amount of matter to keep running experiments, and humans were kept in a controlled state and studied to increase the Swarm's intelligence), they launched the next stage: a project the grand architect named "Optimise Energy capture from the Sun".

The nanobots re-created the most efficient way of capturing the Sun's energy - ancient plants, which the Swarm had studied for centuries. The Swarm added some upgrades on top of what nature came up with, but it was still built on what nature had figured out by itself: a perfect plant to capture the Sun's energy. Each one a perfect copy of the others, plus adaptive movements based on its geolocation and time (which makes all of them unique).

For the plants the nanobots needed water, so they created efficient oceans to feed them. They added clouds and rain as a transport mechanism between the oceans and the plants... etc., etc.

One night the human, whom you already know by the name "Ivan the Liberator" (back then everyone called him just Ivan), wasn't asleep at his usual hour. Suddenly all the lights went off and he saw a spark on the horizon - the horizon it was strictly prohibited to approach. He took his rifle, jumped in a truck and raced to the shore, the closest point to the spark's vector.

Once he approached, there was no horizon or water. A wall of dark glass-like material, edges barely noticeable, just 30 cm wide. To the left and right of the 30 cm wide wall - an image, as real as his hands, of water and sky. At the top of the wall - a hole. He used his gun to hit the wall with the light - it wasn't very thick, but once he hit it, it regenerated very quickly. But when he hit the black wall, it shattered, and he saw a different world - a world of plants.

He stepped into the forest, but these plants were behaving differently. This part of the swarm wasn't supposed to face a human, so these nanobots had never seen one and didn't have optimised instructions for what to do in that case. They started reporting new values back to the main computer and performing default behaviour until updated software arrived from the intelligence center of the Swarm.

The human was observing a strange thing - the plants were smoothly flowing around him to keep a safe distance, the way water steps away from your hands in a pond.

"That's different" thought Ivan, extended his hand in a friendly gesture and said - Nice to meet you. I'm Ivan.

....

In this story a human sees a forest of plants and has no clue that it is a swarm of intelligence far greater than his own. To him it looks like simple repetitive action that doesn't look random -> let's test how intelligent the outside entity is -> if the entity wants to show its intelligence, it answers communication -> if the entity wants to hide its intelligence, it pretends not to be intelligent.

If the Swarm decides to show you that it is intelligent, it can show you intelligence up to your level. It won't be able to explain everything it knows or understands to you, because you will be limited by your hardware. The only limit for the Swarm is the computational power it can get.


This is factually wrong. You are confusing React with Next.js. React produces HTML documents, and the first use case was ReactDOM, a pure client-side rendering library.


No. I'm speaking from experience having used React from its very first release. You're not only factually wrong about me being wrong, your knowledge is obviously quite limited.

React has always supported initially rendering components on the server, then hydrating on the client. You don't have to take my word for it, download the initial release and try it out yourself: https://github.com/facebook/react/releases/tag/v0.4.0

Go ahead, run that and write a simple app. Even better, want to do full SSR with no client lifecycle? Write an app that uses "React.renderComponentToString" and then describe what's happening for me.
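
For anyone who doesn't want to dig up the old release, the modern descendant of that call is renderToString from react-dom/server. A minimal sketch (file name and component are made up, not from that release):

    // server.js - rough sketch of rendering a component to plain HTML.
    const React = require("react");
    const { renderToString } = require("react-dom/server");
    const http = require("http");

    // A hypothetical component - no browser, no client lifecycle involved.
    function Hello({ name }) {
      return React.createElement("h1", null, `Hello, ${name}`);
    }

    http.createServer((req, res) => {
      // Pure server-side rendering: React just spits out an HTML string.
      const html = renderToString(React.createElement(Hello, { name: "HN" }));
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(`<!doctype html><div id="root">${html}</div>`);
    }).listen(3000);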

Even .NET had solutions around using React just for SSR: https://reactjs.net/features/server-side-rendering.html

Claiming that React itself could not just render out a component server-side and spit out the HTML to the client requires several critical misconceptions about how software in general works lol.


I didn't know that, thanks for the pointer, but ReactJS did start as a client-side-only framework.

React's history starts around 2010 or so; it was integrated into Facebook in 2011 and Instagram in 2012 before being open-sourced in 2013. And it was that first open-source release that added renderComponentToString, long after React had been deployed on some of the world's largest websites.

Also, renderComponentToString isn't SSR. That term usually implies client-side hydration so event handlers work, as React is all about state management. But in that release there is no mention of hydration. You could of course do it by hand by rendering to HTML server-side, then replacing the entire DOM with a new client-side-rendered version yourself, if you don't mind wiping the user's state in some cases, which of course people do! Getting SSR to work properly took longer.
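
To make the difference concrete, here's a rough client-side sketch using today's react-dom/client API (the App component is a made-up stand-in for whatever was rendered on the server):

    import React from "react";
    import { hydrateRoot, createRoot } from "react-dom/client";

    // Hypothetical component - in real code it's the same one the server rendered.
    const App = () => React.createElement("button", null, "Click me");

    const container = document.getElementById("root");

    // "By hand": throw away the server-rendered DOM and render from scratch.
    // Works, but wipes whatever state the user already had in the page.
    // createRoot(container).render(React.createElement(App));

    // Proper hydration: reuse the server-rendered markup and only attach
    // event handlers and state on top of it.
    hydrateRoot(container, React.createElement(App));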

And that makes sense. Facebook didn't use JS on the server side, at least not in that era; their web servers were all PHP/Hack. So what would have done the SSR?


Just as a fun fact, React's JSX is actually a port of XHP, which was initially developed for PHP in 2009, I believe. This would explain why React's components were class-based at first (because PHP leans heavily towards class-based OO programming).


Aren't you a treasure, thank the gods you're here to condescend us plebs.


Claiming something is objectively wrong while stating something that can be refuted with 10 seconds of Googling means you're massaging your own ego at the expense of everyone else's time.

Grow up.


You're still condescending. It's unbecoming.


I work on the SSO stack at a B2B company with about 200k monthly active users. One blind spot in our monitoring is when an error occurs on the client's identity provider because of a problem on our side. The service is unusable, yet we don't have any error logs to raise an alert. We tried to set up an alert based on expected vs. actual traffic, but we concluded that it would create more problems, for the reason you provided.


OAuth defines bearer tokens without requiring them to be JWTs.
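
To the client it's just an opaque string in a header; whether it happens to be a JWT is the server's business. A made-up example (URL and token invented):

    // The Authorization header carries an opaque bearer token.
    // It might be a JWT, a random database key, or anything else -
    // OAuth itself doesn't care.
    fetch("https://api.example.com/resource", {
      headers: { Authorization: "Bearer 8xLOxBtZp8" },
    });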


Interestingly, other people are answering the opposite in this thread.


They're wrong.

From ECMA-404[1] in section 6:

> The JSON syntax does not impose any restrictions on the strings used as names, does not require that name strings be unique, and does not assign any significance to the ordering of name/value pairs.

That IS unambiguous.

And for more justification:

> Meaningful data interchange requires agreement between a producer and consumer on the semantics attached to a particular use of the JSON syntax. What JSON does provide is the syntactic framework to which such semantics can be attached

> JSON is agnostic about the semantics of numbers. In any programming language, there can be a variety of number types of various capacities and complements, fixed or floating, binary or decimal.

> It is expected that other standards will refer to this one, strictly adhering to the JSON syntax, while imposing semantics interpretation and restrictions on various encoding details. Such standards may require specific behaviours. JSON itself specifies no behaviour.

It all makes sense when you understand JSON is just a specification for a grammar, not for behaviours.
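
A small illustration: duplicate names are valid per the grammar, and what to do with them is left entirely to the consumer. JavaScript's JSON.parse, for instance, keeps the last one; other parsers may throw or keep the first:

    // Valid JSON per ECMA-404, even though the name "a" repeats.
    const text = '{"a": 1, "a": 2}';

    // The behaviour below is defined by the consumer, not by JSON itself.
    console.log(JSON.parse(text)); // { a: 2 }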

[1]: https://ecma-international.org/wp-content/uploads/ECMA-404_2...


> and does not assign any significance to the ordering of name/value pairs.

I think this is outdated? I believe that the order is preserved when parsing into a JavaScript Object. (Yes, Objects have a well-defined key order. Please don't actually rely on this...)


In the JS spec, you'd be looking for 25.5.1

If I'm not mistaken, this is the primary point:

> Valid JSON text is a subset of the ECMAScript PrimaryExpression syntax. Step 2 verifies that jsonString conforms to that subset, and step 10 asserts that that parsing and evaluation returns a value of an appropriate type.

And in the algorithm:

    c. Else,
      i. Let keys be ? EnumerableOwnProperties(val, KEY).
      ii. For each String P of keys, do
        1. Let newElement be ? InternalizeJSONProperty(val, P, reviver).
        2. If newElement is undefined, then
          a. Perform ? val.[[Delete]](P).
        3. Else,
          a. Perform ? CreateDataProperty(val, P, newElement).

If you theoretically (not practically) parse a JSON file into a normal JS AST and then loop over it this way, then because JS preserves key order, it seems like this would also wind up preserving key order. And because it would add those keys to the final JS object in that same order, the order would be preserved in the output.
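
A quick illustration in any modern engine (with the caveat that integer-like keys are special-cased by the property order rules):

    // JSON.parse preserves the textual order of string keys...
    console.log(Object.keys(JSON.parse('{"b": 1, "a": 2}'))); // ["b", "a"]

    // ...except for integer-like keys, which the JS property order rules
    // always put first, in ascending numeric order.
    console.log(Object.keys(JSON.parse('{"b": 1, "2": 0, "a": 2}'))); // ["2", "b", "a"]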

> (Yes, Objects have a well-defined key order. Please don't actually rely on this...)

JS added this in 2009 (ES5) because browsers already did it and loads of code depended on it (accidentally or not).

There is theoretically a performance hit to using ordered hashtables. That doesn't seem like such a big deal with hidden classes except that `{a:1, b:2}` is a different inline cache entry than `{b:2, a:1}` which makes it easier to accidentally make your function polymorphic.

In any case, you are paying for it, so you might as well use it if (IMO) it makes things easier. For example, `let copy = {...obj, updatedKey: 123}` is relying on the insertion order of `obj` to keep the same hidden class.


In JS maybe (I don't know tbh), but that's irrelevant to the JSON spec. Other implementations could make a different decision.


Ah, I thought the quote was from the JS spec. I didn't realize that ECMA published their own copy of the JSON spec.


Except it is biased in its conclusion:

> However, unlike our OOP example, existing code that uses the Logger type and log function cannot work with this new type. There needs to be some refactoring, and how the user code will need to be refactored depends on how we want to expose this new type to the users.

It is super simple to create a Logger from a FileLogger and pass that to old code (rough sketch below). In OOP you also need to refactor code when you are changing base types, and you need to think about what to expose to client code.

To me option 1 is the correct simple approach, but the author dismisses it for unclear reasons.
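
Roughly what I mean, as a sketch in JavaScript since the article's actual definitions aren't reproduced here (all names below are assumed, not the article's):

    const fs = require("fs");

    // Hypothetical richer type: a FileLogger with capabilities beyond plain logging.
    function makeFileLogger(path) {
      return {
        log: (msg) => fs.appendFileSync(path, msg + "\n"),
        path, // extra capability the plain Logger shape doesn't have
      };
    }

    // Old code only knows the narrow Logger shape: something with a `log` function.
    function oldCode(logger) {
      logger.log("hello from old code");
    }

    // Adapting the richer type to the old interface is a one-liner:
    const fileLogger = makeFileLogger("/tmp/app.log");
    oldCode({ log: (msg) => fileLogger.log(msg) });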


I am not an expert, but is the dark matter theory testable?


Of course it is. It has already passed many tests (for example, gravitational lensing), while some dark matter candidates (WIMPs, primordial black holes) have effectively been ruled out through tests.


Dark matter is not a theory, per se. There are many, many theories that attempt to explain dark matter. Some of them have yet to produce testable hypotheses, others have already been tested.


Thank you. Dark matter is the issue in cosmology that "it appears as though undetectable matter is present in the universe, causing X, Y, Z phenomena."

The issue that I have with people calling dark matter a theory is that they think it requires matter to solve. It doesn't. MOND is a dark matter theory. It explains (in part) why it appears as though undetectable matter is present in galaxies, causing disc velocities not to match expectations.


It's not, but it's accepted because it is the theory that best fits the observations. It has holes, but not as many as others. It will continue to be the accepted model until another one fits the data even better or we can prove/disprove the existence of dark matter.

