> "A good writer doesn't just think, and then write down what they thought, as a sort of transcript. A good writer will almost always discover new things in the process of writing. And there is, as far as I know, no substitute for this kind of discovery." -Paul Graham
What is it about writing that makes it different than thinking?
Writing is a tool to help us think. Since we have limited working memory to think with, the words act like external storage, which helps organise thoughts and cover more ground.
I mean to be fair a lot of this is because men don't compliment each other.
I'm a lesbian and I've been out for 22 years and I still have to be careful giving men compliments because they think 'I like your shirt' means 'I want to suck your dick'.
Not really, the 'Girl Next Door' and 'Gamer Girl' have been sexualized so I'm just slotted into that category. Most of it is a function of what you look like and unfortunately for me my appearance just screams feminine and I tick a lot of physical boxes that are attractive to men.
My understanding is once you're like...55 or 60 you can start complimenting men under 40. Then men put you into 'nice grandma' instead of 'she wants to nail me'.
Yandex tends to store third-party code in their repository. For example, you can find the sources of Spark here. So part of the 40GB+ of code isn't written by Yandex.
A cluster of 6-year-old 24GB NVIDIA Teslas should do the trick... they run for about $100 apiece. Put 12 or so of them together and you have the VRAM for a GPT-3 clone.
Amazon has them listed at $200, but still, that's only $2,400 for 12 of them.
Still, it adds up once you get the hardware you'd need to NVLink 12 of them, and on top of that, the power/perf you get probably isn't great compared to modern compute.
Wonder what your volume would have to be before getting a box with 8 A100's from Lambdalabs would be the better tradeoff.
If you have time to wait for results then sure, it could work in theory. In practice, though, they are so slow and power-inefficient (compared to newer nodes) that no one uses them for LLMs; that's why they cost ~$200 used on eBay.
I just checked eBay and they are shockingly cheap. I can't even get DDR3 memory for the price they're selling 24GB of GDDR5 at... with a GPU thrown in for free.
Why is this? Did some large cloud vendor just upgrade?
Are there any deals like this on AMD hardware? Not having to deal with proprietary binary drivers is worth a lot of money and reduced performance to me. A lot.
These are pretty old, and all the companies are upgrading. But no one is upgrading from AMD hardware - basically no companies care if they use proprietary drivers. They want a good price-to-performance ratio, so they use NVIDIA stuff.
> Inclusion programs often trigger an “us versus them” mindset.
> Although diversity and inclusion training is prevalent in corporate America, its impact is inconsistent. According to the evidence, sometimes the programs even have the opposite effect of what they intend.
> One 2016 study of 830 mandatory diversity training programs found that they often triggered a strong backlash against the ideas they promoted. “Trainers tell us that people often respond to compulsory courses with anger and resistance,” wrote sociologists Frank Dobbin and Alexandra Kalev in the Harvard Business Review, “and many participants actually report more animosity toward other groups afterward.”
Which contains gems like "exceptionally talented, mission driven, innovative and kind Asanas" and "once an Asana, always an Asana" — considering some of the Sanskrit usages are equivalent to "chair" or "seated posture".
"You were good chairs, but you have to go. As they alway say, once a place where you put your ass, always a place where you put your ass."
I'm trying to understand how big of a deal ChatGPT and the like is or isn't. Even people in AI don't seem to agree on whether it's going to change everything or it's overhyped and not a major advancement. Which is it? Or somewhere in the middle?
"if you think AI chatter has reached an annoying level right now you're in for something else. it's going to be the only thing on anybody's mind starting shortly... i'm finding it a bit hard to communicate the urgency and heaviness of what's going on" https://twitter.com/tszzl/status/1617317478987878400
"Personally I can give you a formal definition of intelligence and even a number of speculative sketches of how I think it could be implemented, but I will also tell you that strong AI is not within sight and that recent advances are not moving us in the direction of strong AI" https://twitter.com/fchollet/status/1617579095885500416
As a programmer, large language models let me solve an increasing range of problems that I couldn't have solved without them. So I think they are a very big deal.
Just one example: parsing structured data out of a big pile of poorly formatted PDF documents.
That used to be too difficult and expensive for me to tackle without a small army of data entry people to help do the work.
Today I can point Textract OCR at it and then use a language model to extract structured data.
(I haven't implemented this particular example just yet, but I'm looking for an opportunity to do so.)
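To sketch what I mean (again, I haven't built this; the field names and prompt are made up, and it assumes boto3's Textract client plus the OpenAI completions API):

```python
import json

import boto3
import openai

textract = boto3.client("textract")

def page_to_text(image_bytes):
    # OCR one page image with Textract and join the detected lines
    resp = textract.detect_document_text(Document={"Bytes": image_bytes})
    return "\n".join(b["Text"] for b in resp["Blocks"] if b["BlockType"] == "LINE")

def extract_fields(page_text):
    # Ask the model to pull structured fields out of the OCR'd text as JSON
    prompt = (
        "Extract the invoice number, date, and total amount from the text below. "
        "Reply with a single JSON object with keys invoice_number, date, total.\n\n"
        + page_text
    )
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200, temperature=0
    )
    return json.loads(resp.choices[0].text)
```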
I think that ChatGPT could be a big accelerator for creative activity, but I wouldn't trust any output that I've only partially verified. That limits it to human scale problems in its direct output, but there are many ways that human scale output like code snippets can be useful on computer scale data.
For simple things it's pretty safe. I tried pasting in HTML from the homepage of Hacker News and having it turn that into a list of JSON objects each with the title, submitter, number of upvotes and number of comments.
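The target shape was roughly this (the values here are made-up placeholders, not real front-page data):

```python
# Hypothetical output shape with placeholder values
stories = [
    {"title": "Example story title", "submitter": "someuser",
     "upvotes": 123, "comments": 45},
    # ...one object per row on the front page
]
```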
There are two classes of response: those that are factually "right" or "wrong" ("Who was the sixteenth president of the U.S.?") and those that are opinion/debatable ("How should I break up with my boyfriend?"). People will focus on the facts, and those will be improved (viz: the melding of ChatGPT with Wolfram Alpha), but the opinion answers are going to be more readily accepted (and harder to optimize?).
I have a project I'm working on where I need to turn a bunch of bank statements into spreadsheets; is that an example of what you're talking about? If so, I am extremely interested and would love to know how to implement it.
"if you think AI chatter has reached an annoying level right now you're in for something else. it's going to be the only thing on anybody's mind starting shortly... i'm finding it a bit hard to communicate the urgency and heaviness of what's going on"
"Personally I can give you a formal definition of intelligence and even a number of speculative sketches of how I think it could be implemented, but I will also tell you that strong AI is not within sight and that recent advances are not moving us in the direction of strong AI"
Tools that assist in knowledge work can be very useful and very impactful on the knowledge industries without our ever needing to define "strong AI", let alone "being smart" or "knowing ideas".
What's important is not what we call these things or how we talk about what they do in the abstract, but whether these tools are useful.
There's a general lesson here: Don't waste your time talking yourself in philosophical circles about what "AI" means, or what "knowing" means, or what "ideas" mean. If you'd like further convincing please see Philosophical Investigations by Wittgenstein. If pressed I'm just going to do a cheap imitation so you might as well get it from the source.
Personally, I have found ChatGPT, Stable Diffusion, Copilot, and a number of other large models and tools built on them to be very useful.
I fed a folk song I was working on into ChatGPT and asked it to conjure up some similar evocative scenery... I then generated animations with Stable Diffusion based on a linear interpolation between those descriptions of that evocative scenery in the latent space... Then I wrote some more lyrics about the weird stuff Stable Diffusion was hallucinating... and in the end I had a bunch of visual images, a song, and some animations, all of which seem worthy of publishing. The visual components would otherwise never have been made, as I don't have the time or money to do much more than sit around with my acoustic guitar and write songs.
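In case anyone wants to try the interpolation trick, here's a rough sketch of one way to do it with the diffusers library (a recent version with prompt_embeds support); the prompts, model name, and frame count are just illustrative, not what I actually used:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(prompt):
    # Encode a text prompt into the CLIP embedding space the model conditions on
    tokens = pipe.tokenizer(
        prompt, padding="max_length",
        max_length=pipe.tokenizer.model_max_length, return_tensors="pt"
    )
    return pipe.text_encoder(tokens.input_ids.to("cuda"))[0]

start = embed("a distant desert highway at dusk, neon signs")
end = embed("a forest slowly growing over an abandoned highway")

for i, t in enumerate(torch.linspace(0, 1, steps=30)):
    # Same seed every frame, so only the interpolated prompt embedding changes
    generator = torch.Generator("cuda").manual_seed(0)
    frame = pipe(prompt_embeds=torch.lerp(start, end, t.item()),
                 generator=generator).images[0]
    frame.save(f"frame_{i:03d}.png")
```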
There's a difference between "worthy of publishing" and "ready for publishing"... the animations are in flip book form right now, but here's a big dump to Dropbox:
The images are in png format and have the prompt and model used in the info section of the file, which you can see if you open the "info" panel on Dropbox:
distant desert highway
C F
I'm on a distant desert highway
C G
I'm just trying to find my way home
C F
But this world is all empty and I'm alone
C G C
Spare the reptiles, and the truckers, and the neon homes
F C
And the way that you looked at me last night
F C
Is the way that I stare at the sun at that great light
G C
I had a dream you came to me in another land
G C
Where the forests grew over all where we used to stand
F G C
Over the highways, over the deserts, over our hands
I'm on an untimely troubled Tuesday
I don't need to know all the beasts that roam
But this world is all empty and I'm alone
Spare the cones, and pot holes, and the buffalo
It's not letting me look at the links. I think they work for you because you're logged in, but they don't allow anyone else to view the files. You may have to share the files from within Dropbox to get a public link.
What is in sight is AI that can (sometimes) fool us into thinking it is "strong". And whether it is or not becomes a point for philosophers to debate.
We're heading toward a situation captured fairly well in "The Good Place" -- is Janet actually alive? Is killing her wrong? She would tell you no, as long as you're not approaching the button that actually reboots her :-)
> What is in sight is AI that can (sometimes) fool us into thinking it is "strong".
Or maybe you've fooled yourself into thinking that "strong AI"/general intelligence is actually not just a bunch of tricks that mash together into a smorgasbord that usually works quite well. That includes human intelligence.
I have definitely not fooled myself into thinking that "strong AI" is not a bunch of tricks that mash together into a smorgasbord. That's exactly what I think :-)
Including human intelligence, as you say. I'm reasonably convinced that "intelligence" is an emergent higher-order abstraction that arises from deterministic processes, at the cellular, or digital, level.
I would, however, make a distinction between resilient and brittle intelligence. ChatGPT, for example, seems very human-like for some percentage of responses, but when it goes off the rails, it goes completely wrong. Humans can sometimes be like that, but (I don't think) to the same degree, and certainly they are often capable of a softer landing when they get into unfamiliar territory.
What it has unequivocally done is shown us that, while we thought bigger models would lead to incremental improvements, we got a giant leap with this 10x bigger model.
I.e. we're not good at predicting how far away stronger AI is, even if we can comment on strong/weak AI.
I have been saying lately "if your job can be replaced by AI, it should". I know that sounds crass, but I'm a person who is always trying to replace my tasks with small shell scripts leaving only the real thinking to be the stuff that I do.
I remember a long time ago Joel from Joel on Software made a post/comment (I can't find it, so it must have been a comment) about how the hard part of programming is interpreting the spec into code, and that programmers often get mad because the "spec isn't complete". Of course the spec isn't complete, if it were complete, it would cover every edge case and be as complex as the code. If you could have AI turn a spec into code, then the real intelligence goes into writing the spec to cover the complexity.
It's ok if AI can generate boilerplate for functions or even regurgitate LeetCode solutions (heck, I gave it the typical "coding challenge" we use here as our first filter and it did a good job). The real intelligence in development is knowing what algorithm you want to use for your current problem.
So to me ChatGPT isn't that big of a deal. It does some neat things, and it may make some jobs redundant, but I don't see how it could replace the real value that humans bring to a problem. It may keep replacing the bottom tier workers, but that just frees people up to bring value higher up the value chain.
> that just frees people up to bring value higher up the value chain.
I think at some point this will stop being true. Obviously people displaced from jobs in the industrial revolution found other work, but at some point we can't expect the bottom 10% of a population (in terms of ability to perform non-menial jobs) to do "higher value" work.
I don't say this in any disparaging way and I am not in any way demeaning them or suggesting they don't matter. But they don't necessarily fit into society's expectations for "higher value" work.
We can definitely expect the bottom 10% in your example to do "higher value" work once their currently meager work is automated. For one, they can become supervisors of the robots doing the same work they do today, since they have experience and can provide guidance. They can also transition to work that benefits from human interaction, such as elderly care. They will need some training, but not at a level they can't absorb. As the population ages, healthcare will be the single fastest-growing segment, and robots cannot completely replace those workers. Last but not least, many of the same "bottom 10%" are capable of getting higher education and getting truly "higher value" jobs.
Bottom line is, AI and automation can cause short term pain, but the upside is enormous. Successful countries will be those that can manage that transition.
The bottom 10% are usually those with disabilities like autism. Many of them really aren't capable of getting higher education. I have a 21-year-old son with autism who has never even learned the multiplication tables (as a single example) despite our spending tens of thousands of dollars on private tutoring and other training outside of his schooling. He works a low-level job at the airport that he enjoys. But, realistically, it is at the upper limit of his abilities. He is far, far away from being a lone case. I really don't think many of the bottom 10% can be trained into higher value jobs.
> Also, they can transition to work that benefits from the human interaction standpoint, such as elderly care.
This sounds great in theory, and I fully agree, but the system of compensation and wealth distribution will need to be turned upside down for that to happen.
The value created by current and future automation will somehow need to be captured and distributed to people whose financially profitable jobs are going to be replaced with unprofitable, but socially-beneficial work, there's really no way around it. Those people will still need to be fed, housed and have access to at least basic luxuries.
Furthermore, what do we do in a future "ideal" world where robots and AI are capable of providing the basic necessities to sustain every human alive? Capitalism would likely break down one way or another, with branching paths that take us either to a utopia or to a nightmare dystopia.
> It may keep replacing the bottom tier workers, but that just frees people up to bring value higher up the value chain.
My concern is for people like my son who has autism. He has a job at the airport slinging bags into and out of planes. It is a job he enjoys because he likes transportation (trains, planes, etc.) Realistically, it is a job that is at the very top of the limit of his abilities. If an AI robot took over his bottom tier job, he would not be freed up to bring value higher up the value chain. He would probably become homeless if we were not around to support him.
> programmers often get mad because the "spec isn't complete"
> Of course the spec isn't complete, if it were complete, it would cover every edge case and be as complex as the code.
Fair, but triggering nonetheless, because I've definitely seen this argument used as an excuse for why there was basically no spec aside from "an iPhone app used to rate beer", and for the idea that there is some kind of value in such a contribution that couldn't otherwise be obtained by pulling random strangers in from the street and asking them about cool things they wish computers could do.
"If you could have AI turn a spec into code, then the real intelligence goes into writing the spec to cover the complexity."
If that's the case, then the spec is code. And this isn't the first time something like that has happened: people who write assembly might say to someone writing Python, "How can you call that code when you don't even know which register is storing a value?"
IMO it's analogous to adaptive cruise control on cars. It's very useful in certain specific circumstances, and genuinely helpful. But not yet necessary or transformative unless you live in LA/sit in traffic for a living.
People are claiming to use it to write code; I'm curious how sophisticated said code is. Sure, it might take out some of the grunt work, like an ultra-sophisticated find-and-replace, but you still have to review all the changes it makes and correct any mistakes, so the only thing it really saves is the typing. There's no way you could ask it to write code for a sophisticated architecture without extensive training on said architecture, and I'm not sure how you would even train it for that (can it parse design documents? Diagrams?)
The "Look, AI is replacing creative work first, the thing we thought was most immune to AI!" narrative annoys me. IMO it's just yet another factor revealing how little people value "creative" work. There's a reason relatively simplistic Marvel movies make the big bucks and the erudite starving author/artist is a meme. The market does not appreciate creativity for its own sake, and never has. No one cares about the reincarnation of William Shakespeare if he' s using all his talent to write blogspam. No one cares about Monet's ghost's DeviantArt anime titty drawings. People care about creativity because it's a requirement to produce something new and useful, that use can be pragmatic or symbolic, but if it's neither no one cares and the AI-generated equivalent is good enough.
You want to see AI-proof creativity (at least against currently available AI)? Look at any luxury automobile interior. Look at any sophisticated software/hardware architecture. Look at an aircraft carrier or any other item where there aren't millions of samples to train on. That's not to say AI couldn't contribute to the tools that make these things, but no one working on the above is losing their job to AI any time soon. It's just taking out some of the low-hanging creative fruit. Maybe it's the start of an all-consuming revolution, or maybe this is as far as it goes. Only time will tell, but I've seen enough false revolutions (self-driving cars, AI advertising, crypto) to not buy in until I see hard data showing it doing something more than writing convincing YouTube intros and being used to cheat on high school essays. If the tools turn out to produce genuine value, then I'll learn how to use them to maximum effect at my job. Nothing to get wound up about either way.
Copilot is perfectly capable of understanding my extremely complex spaghetti code and giving accurate suggestions based on context spanning many files. It truly feels like you're doing pair programming with another human being.
tl;dr: in some instances I had to coach it pretty directly to get what I wanted. In other instances it was flawless. And to your point, some of the code it generated was both clearer and faster than the code I would have written for the same task.
Sure, but this kinda proves my point. It can potentially generate good code for simple, atomic problems. It can't write me a REST service that hooks into an existing web backend spread over multiple repos of proprietary code.
Any relationship not visible in the code seems to be outside its capabilities to understand for the moment. I'll be impressed when I can point it at a server cluster and the associated dozen repos, give it some clues, and it can understand how the code for a server cluster interacts with said cluster's configuration and database hookups by simply scanning the files/repos and the info I textually provide.
The problem for us devs is not that it will replace you completely (that day is coming, but it's at least a decade away). The problem is that as developer productivity increases dramatically, this may put pressure on developer job positions, salaries, etc. One hope is that as the price of developing software plummets, demand will increase accordingly so that we won't feel it that much. God knows there are still a ton of areas where software, or better software, would help out a lot but is cost prohibitive at the moment. Once we get into robotics, basically anything that humans do can be improved with software.
More like I'll invest time learning a tool relative to the potential payoff. Right now this tool would be of minimal utility. The vast majority of code I write isn't "make this isolated algorithm more efficient", it's "implement/integrate this new feature into the server cluster". Without deep understanding of the software/server architecture and the ability to derive potential tradeoffs of different approaches, my job cannot be done.
This "if you wait to see results the opportunity will be gone!" mentality is for VCs and other people who's business models require them to be way out on the risk curve, who make a lot of bad calls, but lose relatively little when they fail. It was also partially a product of low interest rates. It is not applicable to most individuals/organizations.
I think it's a big deal for some things, but for others is not. I think it's a good tool, but as of yet can't replace a human. The writing is pretty mediocre.
I know people that use it to write slide deck copy, which I think is nice because no one really cares about that and it saves some time. I do think it's a nice tool to get a general idea of something or even can be used as a writing prompt.
For some coding answers as of now, it's actually just easier and faster to use Google/Bing and find an answer on StackOverflow.
The biggest advantage is not in finding the answer. It's in being able to ask questions about that answer and getting immediate and pretty accurate responses.
Your twit-xamples compare two different things. ChatGPT can pass exams [0] and IQ tests [1], but that doesn't mean it's a "strong AI", just that it's good at talking -- or, rather, creating text output that humans can ascribe thoughtfulness to, because, hey, it's getting the questions right! As right as someone with an IQ of 83 can, anyway. Whether it's "actually" thinking is another question entirely -- but does that matter?! If it -- or its soon-to-be-created, ever-more-complex descendants -- is good enough at aping a human, it'll probably replace a bunch of human jobs. There probably aren't enough "prompt engineer" positions for all the displaced humans to fill.
There's your "change everything" -- it might not be a "strong AI", but if people can argue that Searle's Chinese Room [2] is "actually" talking, and it says useful, monetizable things, then it's close enough to be disruptive.
After dealing with a lot of contractors for my house, I feel like close supervision with knowledge of the problem domain is also required of many humans.
An assistant whose work you constantly have to double-check would get fired ASAP.
That's the problem with Copilot. If I'm not familiar with the API calls then I'm still going to have to dig into the docs. I could run it and it might "work", but that doesn't guarantee that it's correct. There are a whole lot of things in the C API that work but are not correct, such as the gets() function. If I have to do such legwork then it's just as easy to write the code myself.
The utility of AI is proportional to the trust one has in it. Trust is easy to lose and hard to regain. It will take just a few AI mishaps to ruin a product or even an entire industry.
> An assistant whose work you constantly have to double-check would get fired ASAP.
It's an assistant that creates drafts for you. You still need to check them, but usually reading is a lot easier and less time-consuming than writing.
I used it a couple of times to compose some long email replies, and it was just fantastic. I had to fix some minor stuff, but I completed the task in less than 5 minutes, while without ChatGPT it would have taken me about 30 minutes.
> That's the problem with Copilot. If I'm not familiar with the API calls then I'm still going to have to dig into the docs. I could run it and it might "work", but that doesn't guarantee that it's correct.
You have to run/compile it anyway, and if you combine it with existing tools (linters, type checkers and so on) you will detect these kinds of anomalies very quickly.
Honestly, I don't see any value in it; I already have the chat prompt and just type the question into it instead of googling and being redirected to "gptoverflow".
Maybe it can be useful if actual humans validate/fix ChatGPT's responses, but in that case it's probably better to put the effort into fixing the responses directly in ChatGPT.
Not sure you understand. That is what I was going for.
1. Share an interesting prompt.
2. Have someone validate the response if you're not comfortable with what ChatGPT gave you.
Even if ChatGPT (and LLMs in general) is not the way to go for AGI, there is plenty of evidence so far that it will prove to be a powerful tool. Much like airplanes did little in the way of achieving bird-like flight but are obviously a useful invention, in many ways superior to bird-like flight.
I've been using it more and more as a Stackoverflow/Google replacement when working. Sure, it's not always correct, but there's a lot of garbage answers on SO/Google too. But I'm finding it's better at getting me in the right ballpark, and that saves me time.
> Even people in AI don’t seem to agree on whether it’s going to change everything or it’s overhyped and not a major advancement. Which is it?
My feeling is that it (well, LLMs in general, ChatGPT is just the particular one getting public attention) is an overhyped major advancement that is going to change almost everything, but not as much as it is being hyped as changing everything.
> Even people in AI don't seem to agree on whether it's going to change everything or it's overhyped and not a major advancement. Which is it? Or somewhere in the middle?
It's both IMO. It will change everything but is also overhyped at this point.
To you, yes. Now go out into the real world, in which most people don't work in an office and mostly use the internet for entertainment.
Crypto seemed very useful to many people, and to many it still does; you'll find thousands upon thousands of comments on this very website preaching crypto as the next game changer.
That was the crass money-making part of it, but a lot of nerds earnestly believe(d) that cryptocurrencies would have massive world-changing consequences. I think it's increasingly obvious that there's little rational or empirical basis for such belief.
With AI, the biggest claims I see are overwhelmingly from people who are not doing cutting-edge work in the field, who have no real foundation for a belief that these AIs will continue to improve at a dramatic rate. Because to really change the world, they do need to get a lot better.
My personal excitement about language models is based on what they can do today.
I'm a big believer in the "capability overhang" idea, which is that the existing language models still have a huge array of capabilities that we haven't discovered yet.
That theory seems to be proved correct on a constant basis. Even the classic "let's think about this step by step" paper came out less than a year ago: https://arxiv.org/abs/2205.11916 - May 2022.
This paper (https://arxiv.org/abs/2206.07682) also touches on a pretty fascinating phenomenon - that when scaling up large language models they seem to "naturally" obtain new emergent abilities that do not exist on smaller models.
> Now go out in the real world in which most people don't work in an office and mostly use internet for entertainment
I wanted to refute your point by giving some YouTube videos about practical uses and their views. Then I checked YouTube's trending videos and compared the view counter to that of a PewDiePie video of 2 days ago and now I agree. You are right.
But honestly you sound just like someone in 1996 going "Oh the internet isn't going to change anything and is just a fad", and here we are decades later and the internet has changed almost everything in our lives. Every person you know uses the internet every day on their cellphones in one way or another.
Except good luck explaining bitcoin to someone non technical, but sit that same person down in front of Midjourney and have them prompt up some images and they'll have a great time.
I use ChatGPT to write code. If it gets a bit better and can be connected to an editor to create files and update existing ones, developer productivity will go through the roof.
I have a website in development, aidev.codes, that automatically saves the output from OpenAI queries to files you specify (and hosts them in a web page if it's that type of file). GitHub integration is one of many things I have planned. Today I'm fixing some bugs with the knowledge base stuff I just added yesterday and also putting in a template system. It has an !!edit command.
Also, if you use vim you can try my npm package `askleo` (`npm i -g askleo`; not tied to the website, but it requires your own OpenAI API key) with `:r ! askleo Go function to reverse a list of numbers` or whatever.
What sorts of things are you using it (successfully) for? I've gotten it to write a script or two for me, but it feels like usually I have to type out so much context (or domain/business knowledge) that it doesn't end up saving much effort.
I use it as an instant StackOverflow for the most part to get around new libraries or libraries/languages I don't use that often. Also generating custom bash one liners or small scripts. It is priceless for this use case. Yeah, sometimes it is wrong, but in my exp. less than 5%. Also we are lucky that for our purposes we can almost always validate the answer almost instantly and without incurring any cost.
Because you are a developer. Replace yourself with a product manager, and maybe you will start seeing things differently.
I think in the long run LLMs will enable the real-deal NoCode solutions. You would probably need to write an essay to get it right, but you will just need to know a human language understood by the LLM and the business domain.
Thanks! That really had nothing to do with what I asked tho... This bit of thread is literally about using it to write code, so saying "pretend you're a PM" is not really relevant at all
What I meant is that your (our) job as a developer will, in the mid-term future (I would say 10-12 years), be made largely irrelevant, because a PM will be able to program by explaining what they want to an LLM. That's why you are not seeing any real benefit from it now.
Right that's great. I get that. It's gonna put me out of a job in 10-12 years. I'll worry about that later.
The person I responded to was saying that they (as a dev I believe) have been seeing huge productivity gains _right now_ and that's what I'm interested in.
I use it to generate whole test suites from function definitions. Not one test, but dozens, covering various inputs and edge cases. TDD purists will balk at that, but it saves serious amounts of typing out boilerplate. You have to recheck its suggestions, of course.
I couldn't get Copilot to spew anything like that (a single simple test at best, and it fails at that more frequently than it produces something useful).
It's also quite good at converting relatively simple programs or configs between languages. For example, I used it to convert PostgreSQL DDL queries into Hibernate models (and also in reverse), JS snippets into OCaml, XML into YAML, maven pom.xml into gradle build scripts, and a few more.
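To give a flavour of the test-suite thing, the output is roughly this shape; the function under test and the cases below are hand-written toy examples rather than actual model output:

```python
import pytest

def slugify(text: str) -> str:
    # Toy function definition of the kind you'd paste in alongside the request
    return "-".join(text.lower().split())

@pytest.mark.parametrize("text,expected", [
    ("Hello World", "hello-world"),
    ("  leading and trailing  ", "leading-and-trailing"),
    ("already-slugged", "already-slugged"),
    ("", ""),
    ("MiXeD CaSe", "mixed-case"),
])
def test_slugify(text, expected):
    assert slugify(text) == expected
```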
Different use case, I think. With ChatGPT you write something like:
write a function using the python requests library which makes a get request to the URL example.com, parses the JSON response and returns the value of the "foo" field. Throw an exception if the get request fails, the response is not JSON or is invalid JSON, or if the foo field is not present in the response.
I just tried this and got a correct (and reasonable) function on the first try.
Going from a high-level description to a low-level implementation like this is a huge timesaver, but it saves time in a different way than Copilot does.
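For reference, a function of roughly the shape that prompt asks for would look something like this (a hand-written sketch, not the actual ChatGPT output):

```python
import requests

def get_foo():
    try:
        response = requests.get("https://example.com")
        response.raise_for_status()
    except requests.RequestException as e:
        raise Exception(f"GET request failed: {e}")
    try:
        data = response.json()
    except ValueError as e:
        raise Exception(f"Response is not valid JSON: {e}")
    if "foo" not in data:
        raise Exception("'foo' field not present in response")
    return data["foo"]
```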
ChatGPT made me rethink my assessment of how close we are to general AI. I was of the opinion that we need some major architectural innovations to actually get to general AI and while I think we still need some huge discoveries, I think ChatGPT was one of those milestones.
ChatGPT sounds really human, but really I think it's more of a pulling back of the curtain of what many people do a lot of the time: regurgitate ideas. People are capable of more intelligence, but a large part of human work, time, even creativity comes from taking everything you've sucked into your brain during your life and dumping out something else. ChatGPT is amazingly good at this part of mimicking a human.
Has everyone else been using a different tool than me? ChatGPT is interesting, but is laughably bad at any non-trivial instruction.
It's fantastic at generating content that on first glance looks remarkably right, but always fails fine inspection.
All I've seen from LLMs is much better demos of the types of funny things Markov chains were generating two decades ago (anyone remember the various academic paper generators?). However, I have yet to see anything that stands out as really remarkable.
My read is that people want to see incredible progress towards strong AI, and LLMs do a great job of letting people feel like they're seeing that if they want to.
I suspect in 5 years we'll largely have forgotten about LLMs and in 10 they'll come back into popularity because techniques will become more efficient and computing power will increase enough that people can train them on their home PCs... but they'll still just be a fun novelty.
Really? I found it to be extremely capable at very difficult tasks. For example, I had it guide me through using the ODBC SQL drivers (C++). It's also extremely good at generating fictional stories. Unlike other AI solutions, it has a lot of context. For example, it generated one story that mostly used generic names like "the king" or "evil wizard", but I was able to get it to fill in those names in a conversational way, not by modifying the original prompt like you'd need to do with plain GPT-3.