I fired off a quick GPT session to explore the following theme, and I think the results are promising:

In the ever-evolving landscape of creative expression, a new horizon emerges: AI agents. Imagine authors and publishers releasing these agents as they do books or video games, creating interactive narratives and personalized adventures. This is more than just storytelling; it’s a revolution in how we experience and engage with content. AI agents could soon become our guides, companions, and co-conspirators in worlds crafted by human creativity and AI innovation. Are we ready to embark on this new journey? The next chapter in storytelling is being written, and it’s one where the reader becomes a part of the tale.


Well said, and I fully agree. If you horse-race two approaches, you can of course arrive at an arbitrary winner based solely on which version of each approach you choose. You need a deeper look if you want to generalize.


They stumbled into a position where they can make a crap ton of money going up the stack, which can fund the ongoing march toward AGI. (The revenue is not only cash in their pocket; it's also driving up their valuation for future investment.)


> we've heard that when people were using chains/agents they often wanted to see what exactly was going on inside, or change it in someway.

I certainly agree, but I'm having trouble seeing how templates help with this. The templates appear to be a consolidation of examples like those that were already emphasized in the current documentation. This is nice to have, but what does it do to elucidate the inner workings?


The biggest challenge I'm trying to track isn't on the list: online learning. The difficulty of getting LLMs to absorb new knowledge without catastrophic forgetting is a key factor making us so reliant on techniques like retrieval-augmented generation (RAG). While RAG is very powerful, it's only as good as the information-retrieval step and the context size, and quite often those aren't good enough.
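To make that dependency concrete, here's a toy sketch in Python of the retrieval step. The word-overlap scoring, the corpus, and the character budget are all invented for illustration; real systems use embeddings and token counts:

    # Toy RAG retrieval step, illustrative only: the model answers from
    # whatever this returns, so a weak retriever or a tight context
    # budget caps answer quality no matter how capable the LLM is.
    import string

    def tokens(text: str) -> set[str]:
        table = str.maketrans("", "", string.punctuation)
        return set(text.lower().translate(table).split())

    def retrieve(query: str, corpus: list[str], budget: int = 120) -> str:
        """Rank docs by naive word overlap, then pack into a fixed budget."""
        ranked = sorted(corpus, key=lambda d: len(tokens(query) & tokens(d)),
                        reverse=True)
        picked, used = [], 0
        for doc in ranked:
            if used + len(doc) > budget:
                break  # context is full; everything else is invisible to the LLM
            picked.append(doc)
            used += len(doc)
        return "\n".join(picked)

    corpus = [
        "The refund window is 30 days from delivery.",
        "Bananas are botanically berries.",
        "Refunds for digital goods are handled case by case.",
    ]
    print(retrieve("How long is the refund window?", corpus))

Note that this naive ranker happily packs an irrelevant document into the leftover budget, which is exactly the failure mode I mean: the model never sees what the retriever didn't pick, and it can't distinguish good context from junk it was handed.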


You might wonder whether these harbingers are actually tuned in to an alternate but consistent set of values not shared by the majority, rather than simply being irrational consumers. In that same vein, you might wonder whether the product managers of these failed products are part of that same cohort.


Same here. I think it's because you typically conquer a language by developing a fairly complete mental model of that language's behavior, but with CSS there are just too many nooks and crannies to get your head around. While it's uncomfortable for me, I find I do best with CSS when I take a more practical approach and just use best practices to accomplish what I want, without the deep understanding that I crave.

I've found that Kevin Powell [1] is a great resource for this approach, and as a bonus he also helps with the mental-model aspect.

[1] https://www.kevinpowell.co/


I keep going back to LangChain thinking it just hasn't found its legs yet, but every time I do I retreat exasperated. I don't find their abstractions useful or intuitive, and their documentation is woefully scattered and incomplete. Things are moving so quickly with LLMs that theirs is no easy task, but so far they haven't really cracked the nut of making LLM app development easier.


Strong disagree. The way I see it, any successful customer service operation needs to have three things in place:

1. Properly defined and articulated customer service policies and procedures

2. A training program that adequately prepares agents to know, understand and apply those policies and procedures

3. Sufficient inherent capability and disposition of the customer service agents to apply that training

My take is that the (relevant) inherent capabilities of LLMs, even at this early stage, are as strong as or stronger than what I see in the human customer service agents I typically deal with. And the disposition of an LLM can be made arbitrarily good through fine-tuning and prompt engineering.

As such, any company that gets 1 and 2 right has a high chance of providing a better support experience by replacing human agents with LLMs. A rough sketch of how the three pieces might map onto a prompt follows.
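The policy text, the few-shot example, and the call_llm stub below are all placeholders I made up for illustration, not any real vendor's API:

    # Hypothetical mapping of items 1-3 above onto prompt components.

    POLICY = """\
    1. Refunds within 30 days of delivery, no questions asked.
    2. Escalate any legal threat to a human supervisor immediately."""  # item 1

    EXAMPLES = [  # item 2: "training", delivered as few-shot demonstrations
        {"customer": "My package arrived broken.",
         "agent": "Sorry about that! You're within 30 days, so I've "
                  "issued a full refund."},
    ]

    def build_prompt(customer_message: str) -> str:
        shots = "\n".join(f"Customer: {e['customer']}\nAgent: {e['agent']}"
                          for e in EXAMPLES)
        return ("You are a support agent. Follow these policies exactly:\n"
                f"{POLICY}\n\n{shots}\n\nCustomer: {customer_message}\nAgent:")

    def call_llm(prompt: str) -> str:
        # item 3: the model's inherent capability; swap in a real API here
        raise NotImplementedError("placeholder for a real model call")

    # reply = call_llm(build_prompt("I want a refund; it's been three weeks."))

Whether this beats a human agent then hinges on exactly the things companies already get wrong for humans: how well the written policy is articulated and how well the examples cover reality.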


> AIs, at their current level of development, don’t perceive objects in the way that we do – they understand commonly occurring patterns.

You see this claim everywhere: that AI operates on statistics and patterns, not actual understanding. But human understanding is entirely about statistics and patterns. When a human sees a collection of particles and recognizes it as, say, a car, all they are doing is recognizing car-like patterns in how the particles are organized, patterns that correlate strongly with prior observations of things classified as cars. Am I missing something?


After a decade and billions of dollars invested in models trained for object recognition in traffic, those models still struggle with object permanence. Why should we expect some rando painter models to do better? They'll paint a lamppost through a car and not notice anything wrong with it.

The day when these models can show the shoes under a fence, and the reflection of the person behind that fence in an opposing shop window? That day will come. But not with the current crop.


We're not (initially, anyway) trained on photography, literature, and Reddit; our first experience of a banana is probably eating one.

