
I very recently (about a week ago) subscribed to the Pro plan and was indeed surprised by how fast I reached my quota compared to, say, Codex on a similar subscription tier. The UX of Claude Code is generally really nice, which left me with a bit of a bittersweet feeling of not even being able to truly explore all the possibilities: after just doing basic planning and code changes I'm already out of quota for experimenting with various ways of using subagents, testing background tasks, etc.

I use opencode with Codex after all the shenanigans from Anthropic recently. You might want to give that a shot!

Use cliproxyapi to run any model in CC. I use Codex models in CC and it's the best of both worlds!

I remember, a couple of weeks ago when people were raving about Claude Code, getting the feeling that there's no way this is sustainable: they must be burning tokens like crazy if it's used as described. Guess Anthropic did the math as well, and now we're here.

The best thing about the Max plan has been that I don’t have “range anxiety” with my workflows. That frees me to try random things on a whim and explore the outer limits of the LLM's capabilities more.

Maybe I am missing something, but the last few times I tested VMs they never seemed to shrink in RAM usage once they had grown. Is this a real issue, and if so, is there any improvement coming on that front?

You're missing the complexity of making the guest inform the host that it has fully freed this or that slab of memory and that the host may reclaim it until further notice. It's a bit more complicated than the other way around, where the guest believes it has e.g. 4 GiB of RAM available but the host doesn't actually allocate any of it until the guest tries to read or write there. A virtual machine is something else entirely than a containerized piece of software.
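A minimal sketch of the host-side half of that handshake, assuming a Linux host (Python 3.8+) where guest RAM is just an anonymous mapping in the hypervisor process, which is roughly what virtio-balloon / free-page reporting boil down to: once the guest has reported a range as unused, the host can hand the backing pages back to the kernel with MADV_DONTNEED, and they come back as zero-filled pages if the guest ever touches them again. The page counts and function names here are made up for the demo.

```python
import mmap

PAGE = mmap.PAGESIZE
GUEST_RAM_PAGES = 1024  # pretend 4 MiB of "guest RAM" for this sketch

# The hypervisor process backs guest RAM with a plain anonymous mapping.
guest_ram = mmap.mmap(-1, GUEST_RAM_PAGES * PAGE)

# The guest "uses" some memory: the host faults real pages in on first write.
guest_ram[0:PAGE * 256] = b"x" * (PAGE * 256)

def host_reclaim(first_page: int, n_pages: int) -> None:
    """Called when the guest reports [first_page, first_page + n_pages) as free.

    MADV_DONTNEED tells the kernel it may drop the backing pages now:
    the host's RSS shrinks, and the guest sees zero-filled pages if it
    ever touches that range again. (Linux-only; a real hypervisor does
    essentially this when the balloon inflates or when free-page
    reporting hands it a range.)
    """
    guest_ram.madvise(mmap.MADV_DONTNEED, first_page * PAGE, n_pages * PAGE)

# The guest frees the range and tells the host; the host gives it back.
host_reclaim(0, 256)
```

Without that explicit report from the guest, the host has no safe way to know which of those faulted-in pages are actually garbage, which is why memory tends to only ever grow from the outside.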

> Maybe I am missing something but the last few times I tested VMs ...

Tested VMs on what? VMs are used daily everywhere, and there are, what, hundreds of millions of VMs running as we speak? Billions?


I’ll use this post as an occasion to ask a question I can’t find an answer to anywhere, as someone developing a closed-source commercial agentic AI builder app: do either the GitHub Copilot or OpenAI Codex teams plan to expand this quota-usage support to all tools, provided they respect certain rules, even if they are non-open-source and/or commercial? Up to now I was planning to just let people input their own API key, but this kind of integration would be amazing for me. I would really love some clarification on this if anyone from those teams reads this, or if anyone knows who to contact to find an answer.

Am I understanding it correctly, based on these tweets [1][2], that both the Codex and Copilot teams, or at least individual team members, mentioned potentially letting people use their quotas in third-party tools?

I would really like further clarification on those points, as it would be quite relevant for a product I'm building if this were indeed made possible.

[1] https://x.com/jaredpalmer/status/2009844004221833625

[2] https://x.com/thsottiaux/status/2009714843587342393


Wow, exactly the same issue for me, and for two different accounts of mine!


I also feel like a heavily multimodal model could be very nice for this: allow multiple images from various angles, optionally some true depth data even if imperfect (like what a basic phone LiDAR would output), and why not even photos of the same place from other sources taken at other times (just to gather more data), and from all of that generate a 3D scene you can explore, using generative AI to fill in plausible content where data is missing.
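Purely as an illustration (this is not any real model's API), here is a small Python sketch of what the input bundle described above might look like; every class and field name is hypothetical, just the inputs from the comment gathered into one structure.

```python
from dataclasses import dataclass, field
from typing import Optional

# All names below are hypothetical; this only mirrors the inputs listed
# in the comment, not any existing model or service.

@dataclass
class CaptureView:
    image_path: str                        # photo of the scene from one angle
    depth_map_path: Optional[str] = None   # e.g. coarse phone-LiDAR depth, may be missing
    source: str = "user"                   # "user", "web", "archive", ...
    captured_at: Optional[str] = None      # rough timestamp, if known

@dataclass
class SceneRequest:
    views: list[CaptureView] = field(default_factory=list)
    fill_missing_regions: bool = True      # let generative AI invent plausible gaps
    output_format: str = "explorable-3d"   # whatever scene representation the model emits

request = SceneRequest(views=[
    CaptureView("kitchen_north.jpg", depth_map_path="kitchen_north_depth.png"),
    CaptureView("kitchen_south.jpg"),
    CaptureView("old_listing_photo.jpg", source="web", captured_at="2019"),
])
```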


It should be noted that OpenAI now has a specific compaction API which returns opaque, encrypted items. This is, AFAICT, different from deciding when to compact, and many open-source tools should indeed be inspectable in that regard.


It's likely to either be an approach like this [0] or something even less involved.

0: https://github.com/apple/ml-clara


What are your favorite features? I recently downloaded it, and I also use Codex CLI and GitHub Copilot in VS Code, but I don't really know what specific features it has that others might not have.


The UI is better: they box the specific types of actions the orchestrator agent takes, with clear categorization. The standard quality-of-life shortcuts, like typing a number to answer a multiple-choice question, are present here as well. They use specialized subagents, such as one with a big context window to find context in the codebase. The quotas appear to be much more generous than CC's. The agent memory management between compaction cycles seems to have a few tricks CC is missing. Also, with 3.0 Flash it feels faster at the same level of agency and intelligence. It has a feature to focus into an interactive shell where bash commands are being executed by the orchestrator agent. It doesn't feel like Google is trying to push you to buy more credits, or relying on this product for its financial survival; I suspect CC has some dark patterns around this, where the agent runs in circles burning tokens with minimal progress on bugs before you have to top up your wallet. Early days still.


It’s unclear to me whether that would give some access to a token quota or whether it would just be like any other “Sign in with …”. In any case, I am currently developing an app that would greatly benefit from letting my users connect their ChatGPT account and use some of their token quota.


The issue I see is that certain apps, such as one I am currently working on and hope to publish soon on iOS, require a lot of maintenance once published, even if there were no server costs. Given the amount of work I have already put into it, and how much more will be necessary just to keep the app running correctly in the future, I don't really see what other monetization approach would make sense for me. Actually, I would even argue that selling an app without a subscription can (sometimes) set wrong or blurry expectations: if a user pays a single time today, how long do they expect updates for? Will it only be basic bug fixes, or also major new features? With a subscription, I feel like at least if they are unhappy with my app they won't really have lost anything and can just unsubscribe, since they have basically accepted (IMO) that the money they put into the app each billing period only covers the service and potential updates during that period, not future changes.


This used to be handled by selling full-version upgrades and providing patches between versions for free.

