
I would ask ChatGPT to refactor his text rather than waiting for him to do so... I think poor English as an excuse will be a thing of the past.


Instead of saying “the singer sounds so autotuned” we’ll soon be whining

“the text reads too autogpt”

(I propose a pronunciation of “auto-jipped”).


> (I propose a pronunciation of “auto-jipped”).

Probably want to pick another term, since that's a homophone of a racial slur.


Sigh. I had to actually go look this up. Sometimes it's just easier not to talk, because all of the skeletons of old normalized words come to light. I truly was not trying to slur anyone, and I apologize if my (unlucky) attempt at humor landed sour for anyone.


I propose adding some Unicode sign like the double turnstile [1], ⊨ (ALT+8872), which can signify "is a semantic consequence of" or "is stronger than", after a polysemic word to imply that no racial/socially derogatory connotation is to be derived from the word in that phrase, for example, "the text reads too auto-jipped⊨" or "the master⊨ branch is fubar" in a Git context.

Or perhaps an even better sign would be the negated double turnstile, ⊭, "to denote the statement 'does not entail'" [1], making it more explicit. Hence an example would look like "the text reads too auto-jipped(⊭racism)", which can be read "the text reads too auto-jipped and this word, auto-jipped, does not entail racism in this context". Ok, done, racism solved, your move David Guetta [2].

[1] https://en.wikipedia.org/wiki/Double_turnstile

[2] https://www.youtube.com/watch?v=dEI7oX0XxJw


That would be abused faster than you can type "gamer".


I would rather have poor English than confident, completely incorrect facts.


Why would it be factually incorrect? You tell it to proof-read, not add stuff. It works great.


ChatGPT does not know how to proofread. It is not an artificial intelligence. It will take your input and produce the most likely response matching the data it was trained on.

Just like how ChatGPT fails at simple math - ChatGPT doesn't know math https://ai.stackexchange.com/questions/38220/why-is-chatgpt-...


I’ve learned over the last few months that there are certain trigger words that instigate basically pointless discussion around e.g. “what it means to learn”.

I can now add “read” to that list. Let’s play word taboo! The rules are we can’t talk about GPT using anthropomorphic terminology.

Does GPT predict less than useful mathematical computations? Yes, and not just less than useful but basically useless.

Does GPT predict less than useful language translations, ranging from English-French, to summaries, in-the-style-of, etc? No, it’s actually quite useful as when confined to only the information contained in a prompt it doesn’t have to hallucinate an answer.

It is not useful to anthropomorphize the functionality of these tools in either a practical or legal context.

And everyone pick up a copy of Philosophical Investigations by Wittgenstein so y’all can learn about how to avoid snake-eating-tail discourse.


Of course you're more than welcome to use whatever tech for whatever purpose.

Personally, I'm not confident that ChatGPT wouldn't hallucinate facts when prompted to 'just' proof-read.

I would rather have poor English than confident factual errors.


For example, here's a section from the Build Your Own Redis book:

Our server will be able to process multiple requests from a client, to do that we need to implement some sort of “protocol”, at least to split requests apart from the TCP byte stream. The easiest way to split requests apart is by declaring how long the request is at the beginning of the request. Let’s use the following scheme.

The protocol consists of 2 parts: a 4-byte little-endian integer indicating the length of the following request, and a variable length request.

Starts from the code from the last chapter, the loop of the server is modified to handle multiple requests:

GPT suggested this instead:

Our server will process multiple requests from a client by implementing a protocol to separate requests from the TCP byte stream. The simplest method for separating requests is to include the length of each request at the start. The protocol consists of two parts: a 4-byte little-endian integer indicating the length of the request and a variable-length request. The server code from the previous chapter has been modified to handle multiple requests in the following manner:

There are no hallucinated facts because the most probable continuation of the given prompts is one that can gather all required information from the original text itself.

It's sort of like the difference between the truthfulness of analytic and synthetic claims. An analytic claim would be like "It is raining and you're outside, naked, and unsheltered so therefor water is falling on your skin from the sky." A synthetic claim would be like "It is raining outside".

Synthetic claims are said to be contingent on facts outside the text itself. These are the cases where GPT is completely useless.

The error rate for analytic claims is much lower although anyone who is writing anything should do a lot of review before publishing. Think of it like you asked your assistant to write something. You're gonna wanna read it over before you slap your name on it.

I mean, I actually don't care if you use these tools or not but your explanation of how it works will guide other readers in the wrong direction so I feel the need to correct the narrative you've presented.


What prompt did you use? And how big a chunk of text would you say it can handle reliably?


Me: Hey, I need some help cleaning up some language with a lot of grammatical errors, can you help?

GPT: Yes, I'd be happy to help you clean up language with grammatical errors. Please provide the text for me to review.

Me: Our server will be able to process multiple requests from a client, to do that we need to implement some sort of “protocol”, at least to split requests apart from the TCP byte stream. The easiest way to split requests apart is by declaring how long the request is at the beginning of the request. Let’s use the following scheme.

The protocol consists of 2 parts: a 4-byte little-endian integer indicating the length of the following request, and a variable length request.

Starts from the code from the last chapter, the loop of the server is modified to handle multiple requests:

GPT: "Our server will process multiple requests from a client by implementing a protocol to separate requests from the TCP byte stream. The simplest method for separating requests is to include the length of each request at the start. The protocol consists of two parts: a 4-byte little-endian integer indicating the length of the request and a variable-length request. The server code from the previous chapter has been modified to handle multiple requests in the following manner:"

---

That's the entirety of the interaction!

I haven't tested much but for the last day or so I've been thinking a lot about Kant, Frege, Quine and Wittgenstein!

GPT opens the door for some kind of empirical philosophy... like, what are the error rates for various kinds of tasks? Can we use a Kantian framework? How about Frege? How about Quine?

I mean, Quine is actually my favorite of the analytic philosophers because of his indeterminacy of translation argument, and the notion that there really is no analytic/synthetic divide when you get down to it resonates well with me.

Death to metaphysics!

But there seems to be some use in differentiating between "All bachelors are unmarried" and "All bachelors are unhappy" if only because I'm now seeing how making a distinction can have a profound impact on the usefulness of GPT completions.

Briefly and half-assed, Quine's argument is that because you would have to be familiar with language and culture in order to understand "All bachelors are unmarried", the meanings of those words are fact-like and outside the scope of the proposition.

If GPT is able to do some Frege-like substitution of synonyms it is because it has this compressed language model which seems to lend credence to Quine's arguments.


Thanks,

I find the quality of answers you receive out of GPT drastically changes with the way you phrase questions.

I don't think I would ever have come up with asking the question in the way you did.

As someone who has English as a second language, I found GPT often produced incorrect and low-quality answers, while most of my native English-speaking colleagues were getting high-quality answers. Looking at their prompts compared to mine, it's all down to differences in how the questions are phrased.


Still works great to ask it to proofread so whatever point you're trying to make is irrelevant.


When I see poorly written prose, it makes me distrust the technical content it imparts. If you can't take the time to get the grammar right, why should I believe that you got the code right?


There are many signals you can use to help you judge the value of a written piece of text. In this case, it reads to me much more as the writing of someone for whom English is not their native language than of someone who doesn't know what they're talking about technically.

If you want to pre-judge all technical content coming from people who have not spent huge portions of their life living in an English speaking country as being of no value, then I'm sure that will protect you from some bad content, but you're going to be missing out on an awful lot of good stuff too.


That's like complaining that a book writer's site has shitty JS.

"If you can't write code why should I believe that the prose you write is any better?"

That being said, they are selling it, and that's reason enough to complain.


Not everyone speaks English as their native language.


True, but looking at author name - James Smith - it does not seem to be the case.


Potentially, but on the other hand, James Smith is exactly the sort of name you'd choose to use in the Anglosphere if your birth name was difficult to pronounce for English speakers.


It's possible to be good at coding yet poor at grammar.



