Hacker News | pizzathyme's comments

Congrats to the Brex team and the YC partners that supported them!

It seems unbelievable that this is the first time the child ever picked up a paintbrush and applied paint to a surface.

It's probably more like: this is the first "published" final painting he ever did, after doing hundreds of other practice paintings/sketches that don't "count"


My anxiety about falling behind with AI plummeted after I realized many of these tweets are overblown in this way. I use AI every day; how is everyone getting more spectacular results than me? Turns out: they exaggerate.

Here are several real stories I dug into:

"My brick-and-mortar business wouldn't even exist without AI" --> meant they used Claude to help them search for lawyers in their local area and summarize permits they needed

"I'm now doing the work of 10 product managers" --> actually meant they create draft PRDs. They did not mention firing 10 PMs

"I launched an entire product line this weekend" --> meant they created a website with a sign-up that shows a single JavaScript page; no customers

"I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF


Going viral on X is the current replacement for selling courses on {daytrading, Amazon FBA, crypto}.

The content of the tweets isn't the thing; bull-posting or invoking Cunningham's Law is. X is the destination for formula posting, and some of those blue checkmarks are getting "reach" rev-share kickbacks.


Same with LinkedIn. I've seen a lot of posts telling you to comment something to get a secret guide on how to do Y.

If it were successful, they wouldn't be telling everyone about it.


Yeah, if you get enough impressions, you get some revenue, so you don't need to sell any courses, just viral content. Which is why some (not ALL) exaggerate as suggested.

It's a bit insane how much reach you need before you'd earn anything impactful, though.

I average 1-2M impressions/month, and have some video clips on X/Twitter that have gotten 100K+ views, and average earnings of around $42/month (over the past year).

I imagine you'd need hundreds of millions of impressions/views on Twitter to earn a living with their current rates.
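A quick back-of-the-envelope check on those figures (the $42/month and 1-2M impressions are from the comment above; the $5k/month target is my own assumption):

```javascript
// Implied payout rate from the figures quoted above.
const monthlyImpressions = 1.5e6; // midpoint of the quoted 1-2M/month
const monthlyEarnings = 42;       // USD/month, as reported
const perMillion = monthlyEarnings / (monthlyImpressions / 1e6); // ≈ $28 per 1M impressions

// Impressions needed to clear a hypothetical $5k/month living.
const target = 5000; // USD/month, assumed target
const neededImpressions = (target / perMillion) * 1e6; // ≈ 180M impressions/month
```

At that implied rate, "hundreds of millions of impressions" is indeed the right order of magnitude.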


Thanks a lot for your transparency, Jeff! Much needed in this area. And your content is quality, unlike much of what else is being discussed here.

It is really hard to actually make anything substantial from social media exposure. Unfortunately this does not stop many from exaggerating claims in order to (maybe) become internet famous, or to chase high click numbers, etc. So it is both bad business for creators and poisons the discourse for readers - the only real winners are the social media companies and the product companies that get hyped up.


> Unfortunately this does not stop many from exaggerating claims in order to (maybe) become internet famous

I've been thinking about this a lot lately in another context -- viral priests being anti-vax -- and realized it's the other way around: their motivation doesn't matter. The viewers don't want to see moderate content; they want to see highly polarized and controversial topics.

The same goes for the claims about AI. Nobody wants to hear that AI boosts productivity in a nuanced way; people want to hear about either 10X or -10X, so the market dictates the content/meme.


I'm not as familiar with your content but how often do you post? I have a friend who posts 'meme' type of content (all original) and he makes a decent amount, but he has it all queued up.

The worst is Reddit these days.

I pretty much never went there for technical topics at all, just funny memes and such. But one day recently I started seeing crazy AI hype stories getting posted, and sadly I made the huge mistake of clicking on one. Now it's all I get.

Endless posts from subs like r/agi, r/singularity, as well as the various product specific subs (for Claude, OpenAI, etc). These aren’t even links to external articles, these are supposedly personal accounts of someone being blown away by what the latest release of this or that model or tool can do. Every single one of these posts boils down to some irritating “game over for software engineers” hype fest, sometimes with skeptical comments calling out the clearly AI-generated text and overblown claims, sometimes not. Usually comments pointing out flaws in whatever’s being hyped are just dismissed with a hand wave about how the flaw may have been true at one time, but the latest and greatest version has no such flaws and is truly miraculous, even if it’s just a minor update for that week. It’s always the same pattern.

There’s clearly a lot of astroturfing going on.


> There’s clearly a lot of astroturfing going on.

Yeah I think so too. I even see it here on HN

I'm just tuning it all out. The big test is just installing the damn thing and seeing what it can do. There's 0 barrier to trying it


Lo and behold, here’s a concrete example I stumbled across just a few seconds after opening Reddit again (really gotta stop doing that):

https://www.reddit.com/r/codex/s/Y52yB6Fg3A


Completely in the same boat as you; the constant bombardment on Reddit is getting really detrimental to my wellbeing at this point lol

>There’s clearly a lot of astroturfing going on.

Reddit is like 90% astroturfing, trolls, and bots.


I actually read through the logs and the code in the rare instances someone actually posts their prompts and the generated output. If I'm being overly cynical about the tech, I want to know.

The last one I did it on was breathlessly touted as "I used [LLM] to do some advanced digital forensics!"

Dawg. The LLM grepped for a single keyword you gave it and then faffed about putting it into json several times before throwing it away and generating some markdown instead. When you told it the result was bad, it grepped for a second word and did the process again.

It looks impressive with all these json files and bash scripts flying by, but what it actually did was turn a single word grep into blog post markdown and you still had to help it.

Some of you have never been on enterprise software sales calls and it shows.


> Some of you have never been on enterprise software sales calls and it shows.

Hah—I'm struggling to decide whether everyone experiencing it would be a good thing in terms of inoculating people's minds, or a terrible thing in terms of what it says about a society where it happens.


“I used AI to make a super profitable stock trading bot” —-> using fake money with historical data

“I used AI to make an entire NES emulator in an afternoon!” —-> a project that has been done hundreds of times and posted all over github with plenty of references


> “I used AI to make a super profitable stock trading bot” —-> using fake money with historical data

Stocks are another matter. There were wonder "algorithms" even before "AI". I helped some friends tweak some: they had the enthusiasm, I had the programming expertise, and I was curious.

That was a couple of years ago. None of them is rich and retired now - which is what the test runs were showing - and I think most aren't even trading any more.


I vibe coded a few ideas I'd had in mind for a while. My basic stack is HTML, single page, localStorage, and lightweight JS.

It is really good at this.

Those ideas are UI experiments or small tools that help me do stuff.

It's also super great at ELI5'ing anything.
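That stack is about as small as it sounds. A minimal sketch of the localStorage part (the note-keeping example and all names here are mine, not the commenter's), with an in-memory fallback so it also runs outside a browser:

```javascript
const KEY = "notes";

// Use the browser's localStorage when present; otherwise fall back to a
// tiny in-memory stand-in so the sketch also runs under Node.
const store = typeof localStorage !== "undefined" ? localStorage : {
  data: {},
  getItem(k) { return k in this.data ? this.data[k] : null; },
  setItem(k, v) { this.data[k] = String(v); },
};

// localStorage only holds strings, so structured data goes through JSON.
function saveNotes(notes) {
  store.setItem(KEY, JSON.stringify(notes));
}

function loadNotes() {
  const raw = store.getItem(KEY);
  return raw === null ? [] : JSON.parse(raw);
}

saveNotes(["UI experiment: draggable cards", "tool: regex tester"]);
```

In a browser the data persists across reloads and there is no backend to deploy, which is a big part of why this stack suits vibe-coded one-offs.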


Same result if you copied and pasted from a couple passionate blogs.

Not in the same timeframe. My experiments take an hour.

I feel the same. I understand some of the excitement. When I use it I feel more productive, since it seems I get more code done. But I never finish anything earlier, because it never fails to introduce a bizarre bug or behaviour that no sane person doing the task would.

> "I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF

There was a story years ago about someone who put hundreds of novels on Amazon; in aggregate they pulled in a decent amount. I wonder if someone's doing the same but with ChatGPT instead.


Pretty sure there was a whole era where people were doing this with public domain works, as well as works generated by Markov chains spitting out barely-plausible-at-first-glance spaghetti. I think that well started to dry up before LLMs even hit the scene.

"AI helped me make money by evading anti-spam controls" doesn't have quite the same ring to it. :p

"Adding the abbreviation 'AI' to my marketing for online courses for making millions making marketing for online courses made me millions!"

It has happened in Japan. There was one author who was updating 30+ series simultaneously on Kakuyomi, the largest Japanese web novel site. A few of them got top ranked.

Afaik, the way people are making money in this space is selling courses that teach you how to sell mass-produced AI slop on Amazon, rather than actually doing it themselves.

People say outrageous things when they’re follower farming.

At the end of the day, it doesn't really get you that much if you get 70% of the way there on your initial prompt (which you probably spent some time discussing, thinking through, clarifying requirements on). Paid, deliverable work is expected to involve validation, accountability, security, reliability, etc.

Taking that 70% solution and adding these things is harder than if a human got you 70% of the way there, because the mistakes LLMs make are designed to look right while being wrong in ways a sane human never would be. This makes their mistakes easy to overlook, requiring more careful line-by-line review in any domain where people are paying you. They also duplicate code and are super verbose, so they produce a ton of tech debt -> more tokens for future agents to clog their contexts with.

I like using them, they have real value when used correctly, but I'm skeptical that this value is going to translate to massive real business value in the next few years, especially when you weigh that with the risk and tech debt that comes along with it.


> and are super verbose...

Since I don't code for money any more, my main daily LLM use is web searches, especially those where multiple semantic meanings would be difficult to specify with a traditional search or even compound logical operators. It's good for this, but the answers tend to be too verbose, and in ways no reasonably competent human would be. There's a weird mismatch between the raw capability and the need to explicitly prompt "in one sentence" when it would be contextually obvious to a human.


Imo getting 70% of the way is very valuable for quickly creating throwaway prototypes, exploring approaches and learning new stuff.

However getting the AI to build production quality code is sometimes quite frustrating, and requires a very hands-on approach.


Yep - no doubt that LLMs are useful. I use them every day, for lots of stuff. It's a lot better than Google search was in its prime. Will it translate to massively increased output for the typical engineer (esp. senior/staff+)? I don't think it will without a radical change to the architecture. But that is an opinion.

I completely agree. I found it very funny that I have been transitioning from an "LLM sceptic" to an "LLM advocate" without changing my viewpoint. I have long said that LLMs won't be replacing swathes of the workforce any time soon, and that LLMs are of course useful for specific tasks, especially prototyping and drafting.

I have gone from being challenged on the first point, to the second. The hype is not what it has been.


"I used AI to write a GPU-only MoE forward and backward pass to supplement the manual implementation in PyTorch that only supported a few specific GPUs" -> https://github.com/lostmsu/grouped_mm_bf16 100% vibe coded.

Pretty much every X non-political/celeb account with 5K+ followers is a paid influencer shill lol.

Welcome to the internet


One of my favorite stories from the dotcom bust is when people, after the bust, said something along the lines of: "Take Pets.com. Who the hell would buy 40lb dogfood bags over the internet? And what business would offer that?? It doesn't make sense at all economically! No wonder they went out of business."

Yet here we are, 20 years later, routinely ordering FURNITURE on the internet, often delivered "free".

My point being, sure, there is a lot of hype around AI but that doesn't mean that there aren't nuggets of very useful projects happening.


True, but I think the point of that story is it’s really hard to predict what’s crap and what’s just too early.

It doesn’t guarantee the skeptics are wrong all the time.


"Look at what happened with the internet!" also doesn't mean the same will happen with AI

Neither argument works


There's 0 requirement that [new technology] must follow the path of the internet though. So it's kind of an irrelevant non sequitur.

Pets.com was both selling everything at a loss and spending millions on advertising. It wasn't the concept that was the issue.

I would encourage people to test this out for themselves, I think you will find a different result. People today are starved for in-person connection, but are afraid to initiate the conversation.

This doesn't come naturally to me, but after working on it over a few years, 95% of the time strangers are excited to chat and say hi and make a friend.


You mentioned working on it — do you have a particular strategy, venue, or opening line/guiding ethos that you find works well?

I love making friends with strangers, but usually rely on the "handshake protocol" of a casual observation or small talk that is then accepted (with a similar slight-deepening or extension of the thought) or rejected (casual assent or no response at all), until the bandwidth opens and I can foster a more meaningful moment of connection with a pivot like "Oh awesome that you do $THING for work. Do you enjoy what you do?" or "Oh I don't know much about $LOCATION_YOURE_FROM. Good spot for a vacation, or good spot to drive straight through?"

As somewhere between "thinks like an engineer" and "on the spectrum," I really enjoy hearing others' strategies or optimizations (optimizing for quality, connection, warmth) for social situations.


I found out that everybody has at least one subject that they are super passionate and knowledgeable about, and that I can learn at least this one thing from any human being. So instead of pushing the conversation into my areas of expertise, I find it more fun for everybody to let people steer it to what they really care about. This way we both get a sense of connection, it takes the weight off my shoulders of having to perform or amuse people, I get to learn random interesting things, and on top of that people think I am an amazing conversational partner, even though it's them who do most of the talking (lol). Sometimes people go full autistic on you and give you a massive ear beating, but then you always have the option of saying "hey, it's been great talking to you, but I gotta run for a $thing. see you around!"

FWIW I think you're already doing the thing. That's it. But I'd suggest trying not to care too much about optimisation. It's unnecessary in my view because it implicitly puts goals & outcomes as the end, when it's more about meandering and seeing where things go, endless possibilities.


> "Oh I don't know much about $LOCATION_YOURE_FROM."

I always love the most to chat with strangers in line or wherever when I'm in a foreign country, as there's so much good dirt for digging with someone from a far away place. It's funny, though, the number of times I strike up a conversation with someone halfway around the world only to find out they live within a few miles of me. Last time I was in London, for example, the lady in line in front of me had an Australian accent, and I always enjoy talking to Aussies. Yep, she was an Aussie... Who lives a few towns over from me in the US, in the same apartment complex my wife lived in when I met her.


I'd echo this.

There does seem to be some widespread resignation (more so among younger people, <35, if I can generalise a bit) that we're too far gone, that everyone is closed off. But I've generally found that there is no real resolve behind that resignation. Many just do not want to make the start, or do not feel comfortable doing so. Once the start is made, though, the pleasantness of the experience is generally visible.


Exactly this. I don't do it much in the USA to be honest, but when traveling.


Speaking as someone who worked for the SF bay area's largest homeless shelter nonprofit:

People who end up homeless long-term usually have negative social behaviors that push others away. When you help them, they don't tell an interesting story; they act angry or yell at you. When you give them money, they don't make you feel happy; they make you feel afraid or annoyed.

This is unfortunately often due to mental health issues or drug problems. It's very sad, and ends up completely isolating them from all friends, family, and strangers who could help them.

Edit: This article actually puts this into clear terms: long-term homeless people are poor "kindees".


Ideally this friction should be viewed as a normal part of career growth. You will have expanded your expertise and are now capable of harder problems and roles, with more compensation in return.

The typical moves are:
[1] Negotiate for a better title and compensation at your current role (good outcome)
[2] Leave for a better role (good outcome)
[3] Stay, no change, doing more work for the same money (not recommended)


The very first example, which is held up as an error, is actually arguably correct. If you asked a human (me) how many bananas were purchased, they clearly purchased one banana.

Yes the banana weighs 0.4 pounds. But the question was not to return the weight or the quantity, the question was to return the quantity.

It seems the prompt needs more instructions than the author is even aware of.


A very common peeled banana weight is 100g (“metric banana”). This is convenient for calorie counting. 0.4lbs for a single banana as the peeled weight is probably around 125g.

https://www.reddit.com/r/dataisbeautiful/comments/bs741l/oc_...


Or one batch of bananas, weighing 0.4 pounds. The number of bananas is not specified in the receipt, and I would not expect the model to estimate it.


The prompt literally tells it "if not specified assume 1"


It is very unlikely for 0.4 lbs of bananas to be more than one.

https://fdc.nal.usda.gov/food-details/1105314/measures

Sugar bananas or apple bananas would weigh less, but they would cost more and probably not just be listed as bananas.
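The arithmetic backs that up (the ~180 g whole-banana figure is a rough average I'm assuming for a medium banana with peel; the USDA link above lists ~118 g for the peeled fruit):

```javascript
const GRAMS_PER_LB = 453.592;
const WHOLE_BANANA_G = 180; // assumed rough average, medium banana with peel

// How many bananas is the 0.4 lb line item on the receipt?
const bananaCount = (0.4 * GRAMS_PER_LB) / WHOLE_BANANA_G; // ≈ 1.0
```

0.4 lb is about 181 g, i.e. almost exactly one medium banana as weighed at the register.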


That would require the model to know or look up the average weight of a banana, and do arithmetic.


Yes, and it would be great to be able to hover/tap to see the original headline.

I found myself pulling up the original and the honest versions side by side. The translation makes it funny.


Right now there is a lot of drift between the real and honest versions, so it's hard to find the original title.


If you click on the comments in the honest version, it'll redirect you to the real version.


Still it would be great to be able to see on hover


They did mean you, they just meant "imagine" very literally!


Same question, and great work. I would love to know the prompt details of how the Hacker News truth was captured.


Yes, this is absolutely brilliant! Teach us the prompt, o great wizards!


To answer my question myself I gave Microsoft copilot this prompt:

    I want you to rewrite this headline "Amazon will allow ePub and PDF downloads for DRM-free eBooks" 
    into something a little humorous and snarky that reveals the underlying truth that would bring a 
    wry smile to tech-engaged but big tech-skeptical hacker news readers.
    
    This has to fit in the 80 character limit for Hacker News so keep it appropriately short.
      
    Also I want you to reply with exactly one headline and not anything else so I can use your output 
    as part of a processing pipeline
and I get the response

    Amazon Finally Remembers eBooks Aren’t Supposed to Be Prisoners
which I think is great. I started with just the first paragraph and got something too long, with some explanation. I added the second paragraph and got three replies and more explanation. The three replies were all "good enough" in my mind, but I added the third paragraph to control the output.


I prompted Gemini to tell me how to prompt itself to get similar results on other news sites and it said I should give it a description of the intended audience and what it finds funny/snarky.

Which looks like what you did.


Now go deeper! Prompt Gemini to write a prompt for itself that would write a prompt for itself that would get similar results.


Inception 2.0


Can I do it in an infinite loop and bring all the data centers down?

