Hacker News | lhnz's comments

Records and Tuples were scrapped, but as this is JavaScript, there is a user-land implementation available here: https://github.com/seanmorris/libtuple
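For a rough sense of how a userland version can work, here is a minimal interning sketch. This is illustrative only, not libtuple's actual API: the idea is that the same contents always map back to the same frozen object, so ordinary === comparisons behave like structural equality.

```javascript
// Illustrative interning sketch (not libtuple's actual API): the same
// primitive contents always return the same frozen object, so
// tuple(1, 2) === tuple(1, 2) holds.
const pool = new Map();

function tuple(...values) {
  // NOTE: JSON keying only handles primitive members; a real
  // implementation would use a trie of (Weak)Maps to support objects too.
  const key = JSON.stringify(values);
  if (!pool.has(key)) {
    pool.set(key, Object.freeze(values.slice()));
  }
  return pool.get(key);
}

console.log(tuple(1, 2) === tuple(1, 2)); // true
console.log(tuple(1, 2) === tuple(2, 1)); // false
```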


Userland implementations are never as performant as native implementations. That's the whole point of trying to add immutability to the standard.


Even when performance might not be an issue or an objective, there are other concerns about a userland implementation: the lack of syntax is a bummer, and the lack of ecosystem support is the other giant one. For example, can I use this as props for a React component?
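To make the React point concrete: React.memo skips re-rendering only when every prop is identical under Object.is, which is exactly what an interned, tuple-like value would give you. A small sketch, assuming a hypothetical interning tuple() helper like the one above:

```javascript
import React from "react";

// React.memo bails out of re-rendering only when each prop is identical
// under Object.is, so a fresh array literal defeats it on every render.
const List = React.memo(function List({ items }) {
  return React.createElement(
    "ul",
    null,
    items.map((x) => React.createElement("li", { key: x }, String(x)))
  );
});

// <List items={[1, 2, 3]} />       re-renders every time: new array each render.
// <List items={tuple(1, 2, 3)} />  stable reference (hypothetical interned tuple),
//                                  so the memoized component is skipped.
```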


Out of interest, why is it a nightmare to use?

I've always been worried about how overly clever the approach is. Does it have problems?


If it weren't for JavaScript we'd still be downloading binaries and running them unsandboxed on our computers.


Do you have plans to make the data sources pluggable instead of being Kafka-specific?


We absolutely do; the library itself is designed to be extensible. We are currently working on adding webhooks as one of our sources. Are there any specific connectors/sources you'd be interested in?


I have lots of HTTP endpoints that we poll with a cursor, but the underlying data is very large (we work with snapshots of it) and updates very frequently, and eventually we'll move to something else (e.g. interacting directly with the underlying services via capnproto), so really it would just be useful to be able to define these sources ourselves. I'm currently doing full-stack engineering at an HFT, and we were thinking of using DataFusion to let users join, query, and aggregate the data in real time, but I haven't attempted this yet (and to do so means integrating with what currently exists, as I don't have time to rewrite all of the services).
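Roughly the kind of thing I mean, as a sketch (the stream() shape and option names here are hypothetical, not the library's actual API):

```javascript
// Hypothetical pluggable source: poll an HTTP endpoint with a cursor and
// yield batches of records as an async iterator for the pipeline to consume.
class HttpCursorSource {
  constructor({ url, pollIntervalMs = 1000 }) {
    this.url = url;
    this.pollIntervalMs = pollIntervalMs;
    this.cursor = null;
  }

  async *stream() {
    while (true) {
      const qs = this.cursor ? `?cursor=${encodeURIComponent(this.cursor)}` : "";
      const res = await fetch(`${this.url}${qs}`);
      const { records, nextCursor } = await res.json();
      if (records.length > 0) yield records;
      this.cursor = nextCursor ?? this.cursor;
      await new Promise((resolve) => setTimeout(resolve, this.pollIntervalMs));
    }
  }
}
```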


Is this something you'd need to download and install ROMs to use?


No need to download if you've got physical copies. A hacked Wii (which is simple to set up nowadays) can easily dump your games to a usable legal ROM.


You could rip Wii games that you own the physical disc for.


It's bittersweet. It seems likely to me that the US government didn't really want an open trial, due to the possibility of scrutiny, and that their goal was indefinite detention without trial, followed by setting the legal precedent that aiding and abetting legal whistleblowers is a criminal conspiracy.


I'm surprised that an executive or lawyer didn't realise the reputational damage adding these clauses would eventually cause the leadership team.

Were they really stupid enough to think that the amount of money being offered would bend some of the most principled people in the world?

Whoever allowed those clauses to be added and let them remain has done more damage to the public face of OpenAI than any aggravated ex-employee ever could.


If I can sit down at any table in my house and get a multi-monitor setup without needing to buy multiple 4K screens then it'll win me over. However, in practice, I do not think the hardware will be high enough quality for me to want this yet.


These aren't "the rules of capitalism" they are just maximally self-interested hubristic behaviours by companies with monopolies. Capitalism doesn't have rules; states have rules. Capitalism has market incentives and stakeholders.


Capitalism is just a system based around the belief that capital appreciation is the ultimate end goal, and the ends justify the means. Everything is fine so long as the GDP keeps going up.


In capitalism, maximally self-interested, hubristic behaviours are the rules.


This is probably correct, but I'd prefer that my family not read the conversations I've had, as even if I'm not saying anything too private, it feels too intrusive (it'd be a bit like reading my inner thoughts).


It's interesting that you're so trusting of strangers (OpenAI) knowing your inner thoughts, but not your family.


How could I look my wife in the eye, or expect my kids to grow up happy, if they knew their dad doesn't know how to use a regex to detect emojis in a string?
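(For what it's worth, one way to do it in modern JavaScript is with Unicode property escapes, assuming your runtime supports them:)

```javascript
// ES2018+ Unicode property escapes; Extended_Pictographic covers most of
// what people usually mean by "emoji".
const hasEmoji = (str) => /\p{Extended_Pictographic}/u.test(str);

console.log(hasEmoji("hello 👋")); // true
console.log(hasEmoji("hello"));    // false
```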


I hope there is more going on behind those eyes than not being a regex expert.


I don't want my family to know I spent 3 hours chatting about the Holy Roman Empire.


What would change if they knew?


Why the questions? It is no one else's business why they want that level of privacy. Is it your intent to convince them that they don't need privacy?


> Is it your intent to convince them that they don't need privacy?

Quite the opposite actually. My intent is to shed light on the fact that sharing information with OpenAI is not private. And you should not do that with information that you wouldn't even share with people you trust.


> Quite the opposite actually. My intent is to shed light on the fact that sharing information with OpenAI is not private. And you should not do that with information that you wouldn't even share with people you trust.

I'm not OP, but I think you're missing the point.

Privacy and trust aren't really a 1D gradient; if anything, they're planar or even spatial.

Personally, I'd be more willing to trust OpenAI with certain conversations, because the blowback if a conversation leaves their control is different from the blowback if I have that same conversation with my best friend and it leaves my best friend's control. The same premise underlies how patients can choose who to disclose their own health matters to, or choose who their providers can disclose to.

Same reason behind why someone may be willing to post a relationship situation to r/relationship_advice and yet not talk about the same thing with family and friends.


> Same reason behind why someone may be willing to post a relationship situation to r/relationship_advice and yet not talk about the same thing with family and friends.

I ask that you consider the people who use Reddit and the people who run Reddit independently. The people who use Reddit are not in a position of power over someone who asks for advice. The people who run Reddit, on the other hand, are in a position of power to be able to emotionally manipulate the person who asked for advice. They can show you emotionally manipulative posts to keep your attention for longer. They can promote your post among people who are likely to respond in ways that keep you coming back.

OpenAI has a similar position of power. That's why you shouldn't trust people at either of those companies with your private thoughts.


You're assuming power comes with a guarantee of use. OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model (I'm speaking using security language, but I doubt most people go so far as to threat model these interactions; it's mostly intuitive at this point).

Your family has limited power in the grand scheme of things, but the likelihood that they may leverage what power you give them over you is much higher.

The IRS has vast power and is likely to use it against you, which is why tax fraud is usually a bad idea.

Hence "planar" rather than linear.


> OpenAI has vast amounts of power with the data they're collecting, but the likelihood of OpenAI using it against any individual is small enough that an individual could consider it to be outside their threat model

I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replace OpenAI with something like Facebook or Youtube in your head.

Just because they are using their power on many individuals doesn't mean that they are not using their power against you too.


> I think your use of the word "individual" is a bit weird here. I absolutely find it likely that OpenAI is doing individualized manipulation against everyone who uses their systems. Maybe this would be more obvious if you replace OpenAI with something like Facebook or Youtube in your head.

> Just because they are using their power on many individuals doesn't mean that they are not using their power against you too.

Yeah but at this point you're identifying individual risks and grasping at straws to justify manipulating* everyone's threat model. You can use that as your own justification, but everyone manages their own personal tolerance for different categories of risks differently.

*Also, considering the published definition of manipulation is "to control or play upon by artful, unfair, or insidious means especially to one's own advantage," I think saying that "OpenAI is doing individualized manipulation against everyone who uses their systems" is an overreach that requires strong evidence. It's one thing if companies use dark UX patterns to encourage product spend, but I don't believe (from what I know) that OpenAI is at a point where they can intake the necessary data both from past prompt history and from other sites to do the personalized, individualized manipulation across future prompts and responses that you're suggesting they're likely doing.

Considering your latest comment, I'm not sure this discussion is receiving the good faith it deserves anymore. We can part ways, it's fine.


Too much discussion about the Holy Roman Empire over dinner? People talk to get things off their mind sometimes, not for the infinite pursuit of conversation.


My point was not that they should talk about the Holy Roman Empire with their family, but that they shouldn't share information with strangers that they wouldn't share with their family.

If you don't want your family to know something, you shouldn't tell it to OpenAI either.


> If you don't want your family to know something, you shouldn't tell it to OpenAI either.

Yeah, I think this is an over-reduction of personal privacy models, but can you tell me why you believe this?


The reason you wouldn't say something to someone is because you are afraid of the power that you give people along with that knowledge.

Your family is in a position of power, which is why it can be scary to share information with them. People at OpenAI are also in a position of power, but people who use their services seem to forget that, since they're talking to them through a computer that automatically responds.


Converging threads here: https://news.ycombinator.com/item?id=38956734

tldr: power (or, if you want, impact) is the linear dimension; likelihood adds a second dimension to the plane of trust.


In practice, likelihood directly correlates with power. Perhaps there is causation (power corrupts?)


Some people are more responsible than others.

For example, one's spouse typically has a lot of power, but hopefully a low likelihood in practice.


I need data on that. I haven't seen that in practice.


This is silly. It's not like OpenAI is going to find your family's contact info, then personally contact them and show them what you've been talking about with ChatGPT. It's just like another post here that compared this to writing a post on /r/relationshipadvice with very personal relationship details: the family members are extremely unlikely to ever see that post, the post is under a pseudonym (and probably a throwaway too), and the likelihood that someone will figure out the identity of the poster and seek out their family members to show the post to them is astronomically small.


They would know that it was neither holy, nor Roman, nor an empire. Discuss.


Is that truly interesting? OP does not have to care about what the AI thinks of him. OP does not have to care about accidentally offending or hurting the AI either. OP does not have to care about whether the AI finds him silly or whatever.

Normal humans care about all of those with their families.


AI is a tool controlled by people. In this case, people who are not OP.


So? That doesn't invalidate the point of the comment you replied to.

To give another example: The cashier at the supermarket knows when I'm buying condoms, but that doesn't mean I want to tell my parents.

And neither would I want to know as a parent, when or whether my kids order bondage gear on Amazon.

It's not just about my information going to other people, but also keeping certain information of other people from reaching me.


Fine then. You don't want your family to find out about the love you have for the Roman Empire. But you are a programmer, yes? So make an app that's just a wrapper for the ChatGPT API you're paying for and distribute that to your family's phones. They'll use your OpenAI API key and each will have their own little ChatGPT 4 to query. Have fun.
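A minimal sketch of such a wrapper, assuming the standard chat completions endpoint (the model name is a placeholder and there's no error handling):

```javascript
// Minimal personal wrapper around the OpenAI chat completions API,
// billed to your own API key. The model name is a placeholder.
async function ask(prompt) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

ask("Summarize the Holy Roman Empire in one sentence.").then(console.log);
```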

