
I've been using Cherry Studio, works great.

Sometimes it's just bias, but 2.5 Pro had benchmarks showing the degradation (plus they changed the name every time, so it was obviously a different checkpoint or model).


Can you give a specific example of something that can't be expressed with a flowchart but can be with Rowboat?


In theory you could express most things as a flowchart, but the complexity of doing that quickly escalates. A customer support bot that goes beyond informational answers is a good example of something that is hard to express in a flowchart (without exploding complexity) but can be built in Rowboat.

Here is some personal experience: we previously built Coinbase's automated chatbot, and we used a flowchart-type builder to do it. This was an intent-entity-based system that used deep learning models. It started great, but it pretty quickly became a nightmare to manage. To account for the fact that users could ask things out of turn or move across topics every other turn, we added a concept called jumps - where control could move from one workflow path to another, unrelated path in one hop - which again introduced a lot of maintenance complexity.

The way we see it: when we assign a task to another human or a teammate, we don't give them a flowchart - we just give them high-level instructions. Maybe that should be the standard for building systems with LLMs?

Is this making sense?


Is the high-level instruction compiled to a flowchart under the hood? If so, maybe a conversational interface is another layer on top of a flowchart and not an alternative? Overall it makes sense that flowcharts are limiting when they get big, yes. Product looks cool - congrats on the launch.


Thanks!

No, the instructions are not compiled into a flowchart under the hood. We use OpenAI's Agents SDK and use handoffs as the mechanism to transfer control between agents.

There are three types of agents in Rowboat:

1. Conversational agents can talk to the user. They can call tools and can choose to hand off control to another agent if needed.

2. Task agents can't talk to users, but they can call tools and do things in a loop - they are internal agents.

3. A pipeline agent is a sequence of task agents (here the transfer of control is deterministic).

For instance, if we build a system for airline customer support, there might be a set of conversational agents, one for each high-level topic like ticketing, baggage, etc., and internally they can use task and pipeline agents as needed.
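
Here's a minimal sketch of the handoff primitive using the OpenAI Agents SDK in Python - the agent names and instructions are just illustrative placeholders, not our actual prompts or Rowboat's internals:

    # pip install openai-agents  (requires OPENAI_API_KEY in the environment)
    from agents import Agent, Runner

    baggage = Agent(
        name="Baggage",
        instructions="Handle baggage questions: allowances, lost bags, fees.",
    )

    ticketing = Agent(
        name="Ticketing",
        instructions="Handle ticket changes, cancellations, and refunds.",
    )

    # A conversational triage agent: it talks to the user and may hand off
    # control to a specialist, which then owns the conversation.
    triage = Agent(
        name="Triage",
        instructions="Route the customer to the right specialist agent.",
        handoffs=[ticketing, baggage],
    )

    result = Runner.run_sync(triage, "My suitcase didn't arrive in Denver.")
    print(result.final_output)

Task and pipeline agents sit behind these conversational agents and never talk to the user directly.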

Does this make sense?


In my testing this model is quite bad and far behind 235B-A22B. https://fiction.live/stories/Fiction-liveBench-Sept-12-2025/...


They should give money that can be used on anything instead of specifically for healthcare. That way you can choose to take care of your kids yourself and put that money towards food, rather than having to work and then outsource childcare.


> MongoDB Atlas

It took a while, but eventually open source dies.


Uh, people have wasted entire lifetimes chasing wild geese. Newton and Einstein both spent the latter halves of their lives that way :( despite being geniuses.


I think the primary difference would be that they didn't waste billions of dollars on their research.


Isaac Newton dedicated over thirty years to the study and practice of alchemy, writing over one million words on the subject - comparable in scale to his writings on mathematics and physics.

I'd rather GDP be $1B smaller right now if it meant that Newton had spent another 30 years on physics and math.


If you actually read the Claude article, it says the same things as the Cognition article; it just has a different definition of multi-agent.


It sucks.


Lol, downvoted. Come on, anyone who has used Gemini and Claude Code knows there's no comparison... gimme a break.


You're getting downvoted because of the curt "it sucks", which shows a level of shallowness in your understanding.

Nothing in the world is simply outright garbage. Even the seemingly worst products exist for a reason and are used for a variety of use cases.

So, take a step back and reevaluate whether your reply could have been better. Because, as it stands, it simply "just sucks".


Can you detail the differences you see that substantiate your judgement?


It would hurt Nvidia, not benefit it - that's why Nvidia spends a lot of effort to prevent that from happening, and it's not the case currently.

They really need to avoid the situation in the console market, where the fact that there are only three customers means almost no margins on console chips.


Prior to the AI boom, Nvidia had a much, much more diverse customer base in terms of revenue mix. According to their 2015 annual report [1], their revenues were spread across the following segments: gaming, automotive, enterprise, HPC and cloud, and PC and mobile OEMs. Gaming was the largest segment and contributed less than 50% of revenues. At that time, with a diverse customer base, their gross margins were 55.5%. (This is a fantastic gross margin in any industry outside software.)

In 2025 (fiscal year), Nvidia only reported two revenue segments: compute and networking ($116B revenue) and graphics ($14.3B revenue). Within the compute and networking segment, three customers represented 34% of all revenue. Nvidia's gross margins for fiscal 2025 were 75% [2].

In other words, this hypothesis doesn't fit at all. In this case, having more concentration among extremely deep-pocketed customers competing over a constrained supply of product has caused margins to skyrocket. Moreover, GP's claim of monopsony doesn't make any sense. Nvidia is not at any risk of having a single buyer, and with the recent news that sales to China will be allowed, the customer base is going to become more diverse, creating even more demand for their products.

[1] https://s201.q4cdn.com/141608511/files/doc_financials/annual...

[2] https://s201.q4cdn.com/141608511/files/doc_financials/2025/a...
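
A quick back-of-envelope check of those figures (segment revenues from the FY2025 10-K linked above; the per-customer split beyond "three customers = 34%" isn't disclosed, so only the aggregate is computed):

    # Nvidia FY2025 segment revenues, in $B (from the 10-K cited above)
    compute_networking = 116.0
    graphics = 14.3
    total = compute_networking + graphics

    # Three customers represented ~34% of revenue
    top3_share = 0.34
    print(f"Top-3 customers: ~${total * top3_share:.1f}B of ${total:.1f}B")

    # Gross margin: 55.5% in FY2015 (diverse base) vs 75% in FY2025
    for year, margin in [("FY2015", 0.555), ("FY2025", 0.75)]:
        print(f"{year} gross margin: {margin:.1%}")

Concentration went up and margins went up with it - the opposite of what a monopsony squeeze would predict.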


I'm not sure your analysis is apples to apples.

Prior to the AI boom, the quality of GPUs slightly favored Nvidia, but AMD was a viable alternative. Also, there are scale differences between 2025 and before the AI boom -- simply put, there was more competition in the market for a smaller bucket, plus favorable winds on supplier production costs. Further, Nvidia just has better software tooling through CUDA.

Since 2022 and the rise of multi-billion-parameter models, Nvidia's CUDA has had a lock on the business side, but the company faces rising costs due to terrible US trade policy, the significant rebound from COVID, geopolitical realignments, workforce inflation, and rushed/buggy power supplies as the default supply options -- all of which make their position quite untenable; mostly, CUDA is their saving grace. If AMD got their druthers about them and focused, they'd potentially unseat Nvidia. But until ROCm is at least _easy_, nothing will happen there.


I merely commented on the concentration of customers and how it has not hurt Nvidia's margins at all. In fact, they have expanded quite dramatically. All of your other points are moot.

> "rising costs"

Nvidia's margin expansion would suggest otherwise. Or at least, the costs are not scaling with volume/pricing. Again, all we need to do is look at the margins.

> "their position quite untenable ... But until ROCm is at least _easy_ nothing will happen there"

Seems like you're contradicting yourself - not sure what point you're trying to make. Bottom line: there is absolutely no concern about monopsony as suggested by the GP. Revenue is booming and margins are expanding. Will it last? Who knows. Fat margins tend to invite competition.


Nobody said this was the case...

The only example I used was the console market, which has been ruined by this issue. Nvidia largely left that market because it was that bad.


The console market is low-margin because the console makers always seem to find someone ready to take low margins (e.g. AMD). Nvidia was in the console market before but left it due to low margins. Nvidia only sells Nintendo an old, low-development-effort chip, probably at a good margin. The chips in the Switch 2 use a node from 2020, are super cheap to manufacture, and required little development effort from Nvidia.

AMD, however, has to design new custom APUs for the Xbox and PS. Why do they do that? They could just decide to step away from the tender, but they won't, because they seem to be desperate for any business. Jensen was like that 20 years ago, but he has learned that some business you simply step away from.


This whole subthread is about the claim that Nvidia is at risk of a monopsony situation. I pointed out that while revenue has concentrated on a few customers post-AI boom, margins have improved, suggesting Nvidia is nowhere near that risk and not veering toward it. Revenue is exploding, as are margins.

