
4.5 can extremely quickly distill and work with what I, at least, consider complex, nuanced thought. 4.5 is night and day better than every other AI for my work; it's quite clever and I like it.

Very quick MVP comparison for the "show me what you mean" crew: https://chatgpt.com/share/67c48fcc-db24-800f-865b-c0485efd7f... & https://chatgpt.com/share/67c48fe2-0830-800f-a370-7a18586e8b... (~30 seconds vs ~3 minutes)



4.5 has better 'vibes' but isn't 'better'. As a concrete example:

> Mission is the operationalized version of vision; it translates aspiration into clear, achievable action.

The "Mission is the operationalized version of vision" is not in the corpus that I am find and is obviously a confabulated mixture of classic Taylorist like "strategic planning"

SOPs and metrics, which will be tied to compensation and the unfortunate ubiquitous nature of Taylorism would not result in shared purpose, but a bunch of Gantt charts past the planning horizon.

IMHO, "complex nuanced thought" would mean understanding the historical issues and at least respecting the divide between classical and neo-classical org theory, or at least avoiding the pollution of more modern theories with classical baggage, which is a significant barrier to delivering value.

Mission statements need to share strategic intent in an actionable way; strategy is not operationalization.


I have been experimenting with 4.5 for a journaling app I am developing for my own personal needs: for example, turning bullet-point/unstructured thoughts into a consistent diary format and voice.

The quality of writing can be much better than Claude 3.5/3.7 at times, but it struggles with a similar confabulation of information that is not in the original text but "sounds good/flows well". Which isn't ideal for a personal journal... I am still playing around with the system prompt, but given the astronomical cost (even with me as the only user) and the marginal benefits, I am probably going to end up sticking with Claude for now.
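In case it's useful context, the whole pipeline is basically one chat call with a strict system prompt. A minimal sketch of what I'm doing (the model id and prompt wording are just my current experiment, not a recommendation):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Assumed model id; use whatever 4.5 snapshot your account exposes.
  resp = client.chat.completions.create(
      model="gpt-4.5-preview",
      temperature=0.2,  # lower temperature seems to curb the embellishment a bit
      messages=[
          {"role": "system", "content": (
              "Rewrite the user's bullet points as a first-person diary entry "
              "in a consistent voice. Use ONLY facts present in the input; do "
              "not add details, however good they might sound."
          )},
          {"role": "user", "content": "- overslept\n- fixed the sync bug\n- long walk, cleared my head"},
      ],
  )
  print(resp.choices[0].message.content)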

Unless others have a recommendation for a less robot-y sounding model (that will, however, follow instructions precisely) with API access other than the mainstream Claude/OpenAI/Gemini models?


I've found this on par with 4.5 in tone, but not as nuanced at connecting super-wide ideas in systems; 4.5 still does that best: https://ai.google.dev/gemini-api/docs/thinking
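If you want to try it, it's a couple of lines through the google-genai SDK. Rough sketch (the model id below is the experimental "thinking" variant as of this writing; check the linked docs for the current name):

  from google import genai

  client = genai.Client(api_key="YOUR_API_KEY")

  response = client.models.generate_content(
      model="gemini-2.0-flash-thinking-exp",  # assumed id, may have rotated
      contents="Turn these bullets into a diary entry, using only the facts given: ...",
  )
  print(response.text)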

(Also: the person you are responding to is doing exactly what you're saying you don't want done: taking something unrelated to the original text (Taylorism) that could sound good, and jamming it in.)


The statement "Mission is the operationalized version of vision; it translates aspiration into clear, achievable action" isn't a Taylorist reduction of mission to mechanical processes - it's actually a nuanced understanding of how these organizational elements relate. You're misinterpreting what "operationalized" means in this context. From what i can tell, the 4.5 response isn't suggesting Taylorist implementation with Gantt charts etc it's describing how missions translate vision into actionable direction while remaining strategic. Instead of jargon, it's recognizing that founders need something between abstract vision and tactical execution. Missions serve this critical bridging function. CEO has vision, orgs capture the vision into their missions, people find their purpose when aligned via the 2. Without it, founders either get stuck in aspirational thinking or jump straight to implementation details without strategic guidance. The distinction matters exactly because it helps avoid the dysfunction that prevents startups from scaling effectively. I think you're assuming "operationalized" means tactical implementation (Gantt charts, SOPs) when in this context it means "made operational/actionable at a strategic level". Missions != mission statements. Also, you're creating a false dichotomy between "strategic intent" and "operationalization" when they very much, exist on a spectrum. (If anything, connecting employees to mission and purpose is the opposite of Tayloristic thinking, which viewed workers more as interchangeable parts than as stakeholders in a shared mission towards responding to a shared vision of global change) - You are doing what o1 pro did, and as I said: As a tool for teaching business to founders, personally, I find the 4.5 response to be better.


An example of a typical naive definition of a mission statement is:

> A concise, clear, and memorable statement that outlines a company's core purpose, values, and target audience.

> "made operational/actionable at a strategic level".

Taking the common definition from the first part of this post, what do you think the average manager would do, given that in the social sciences, operationalization is explicitly about measuring abstract qualities? [1]

"operationalization" is a compromise, trying to quantify qualitative properties, it is not typically subject to methods like MECE principal, because there are too many unknown unknowns.

You are correct that "operationalization" and "strategic intent" are not mutually exclusive in all aspects, but they are for mission statements that need to be durable across changes that no CEO can envision.

The "made operational/actionable at a strategic level" is the exact claim of pseudo scientific management theory (Greater Taylorism) that Japan directly targeted to destroy the US manufacturing sector. You can look at the former CEO of Komatsu if you want direct evidence.

GM's failure to learn from Toyota at NUMII (sp?) is another.

The planning process needs to be informed by strategy, but planning is not strategic; it has a limited horizon.

But you are correct that it is more nuanced and neither Taylor nor Tolstoy allowed for that.

Neo-classical org theory is where bounded rationality was first acknowledged, although the Prussian military figured that out long before Taylor grabbed his stopwatch to time people loading pig iron into train cars.

I encourage you to read:

Strategy: A History by Sir Lawrence Freedman

for a more in-depth discussion.

[1] https://socialsci.libretexts.org/Bookshelves/Sociology/Intro...


Your responses are interesting because they drive me to feel reinforced in my opinion. This conversation is precisely why I rate 4.5 over o1 pro. I prompted in a very, very specific way. I'm afraid to say your comments are highly disengaged from the realities of business and business building. I appreciate the historical context and recommended reading (although I assure you, I am extremely well versed).

The term "operationalized" here refers to strategic alignment, not Taylorist quantification; think guiding principles over rigid metrics. You are badly conflating operationalization in the social sciences (which is about measurement) with strategic operationalization in management, which is not the same thing. Again: operationalized in this context means making the mission actionable at a strategic level, not quantification. Modern mission frameworks prioritize adaptability within durable purpose, avoiding the pitfalls you've rightly flagged.

Successful founders don't get caught up in these theoretical distinctions. Founders taught by me, and I guess by GPT-4.5, correctly understand mission as the bridge between aspirational vision and practical action. This isn't "Greater Taylorism" but pragmatic leadership. While your historical references (NUMMI, not NUMII) demonstrate academic knowledge, they miss how effective missions actually guide organizations while remaining adaptable. The 4.5 response captured this practical reality well: it pointed to, but did not create, artificial boundaries between interconnected concepts.

If we had some founders trained by you (o1 pro) and me (GPT-4.5), I would be willing to bet my founders would outperform yours any day of the week.


Tuckman as a 'real' framework is a belief, so that is fair.

He clearly communicated in 1977 that his ideas were never formally validated, and he cautioned against their use in other contexts.

I think the concepts can be useful, provided you don't take them as anything more than a guiding framework that may or may not be appropriate for a particular need.

https://core.ac.uk/download/pdf/36725856.pdf

I personally find value in team and org mission statements, especially for building a shared purpose, but to be honest, the studies on that are more about manager satisfaction than anything else.

There is far more data on the failure of strategy execution, and linking strategy with purpose as well as providing runways and goals is one place I find vision and mission statements useful.

Given that up to 90% of companies fail at strategy execution, and that employee engagement is in free fall, the fact that companies are still in business means little.

Context is king, and this is horses for courses, but I would caution against ignoring more recent, Nobel-winning theories like Holmström's theorem.

Most teams don't experience the literal steps Tuckman suggested: rarely all at once, and never as one-time, singular events. As the above link demonstrated, some portions, like storming, can be problematic.

Make them operationalize their mission statement and they will, and it will be set in concrete.

Remember von Moltke: "No plan of operations extends with certainty beyond the first encounter with the enemy's main strength."

There is a balance between C2 and mission-command styles. The risk is forcing, or worse, intentionally causing, people to resort to C2 when you almost always need a shifting balance between command-based and intent-based solutions.

The Feudal Mode of Production was sufficient for centuries, but far from optimal.

The NUMMI reference relates to exactly the same reason Amazon's profits historically rose higher than its head-count increases should have allowed:

Small cross-functional teams with clearly communicated tasks and enough freedom to accomplish those tasks efficiently.

You can look at Trist's study on the challenges of incentives that lead teams to game the system. The same problem happened under Ballmer at MS, and DEC failed the opposite way, trying to do everything at once and please everyone.

https://www.uv.es/=gonzalev/PSI%20ORG%2006-07/ARTICULOS%20RR...

The reality is that the popularity of frameworks rarely relates to their effectiveness. Building teams is hard; making teams work as teams, across teams, is even harder.

Tuckman may be useful in that... but this claim is wrong:

> "Modern mission frameworks prioritize adaptability within durable purpose, avoiding the pitfalls you’ve rightly flagged"

Modern _ frameworks prioritize adoption, and depending on a framework to solve your company's needs will always fail. You need to choose a framework that fits your strategy and objectives, and adapt it to fit your needs.

Learn from others, but don't ignore the reality on the ground.


Regarding Tuckman's model, there are actually numerous studies validating its relevance and practical application: Gren et al. (2017) validated it specifically for agile teams across eight large companies. Natvig & Stark (2016) confirmed its accuracy in team development contexts. Bonebright's (2010) historical review demonstrated its ongoing relevance across four decades of application.

I feel we're talking past each other here. My original point was about which AI model is better for MY WORK. (I run a startup accelerator for first-time founders.) 4.5, in 30 seconds versus minutes, provided more practical value to founders building actual businesses, and saved me time. While I appreciate your historical references and academic perspectives, they don't address my central argument about GPT-4.5's response being more pragmatically useful.

The distinction between academic precision and practical utility is exactly what I'm highlighting. Founders don't need perfect theoretical models; they need frameworks that help them bridge vision and execution in the real world. When you bring up feudal production modes and von Moltke, we're moving further from the practical question of which AI response would better guide someone trying to align teams around a meaningful mission that drives business results. It's exactly why I framed the two prompts the way I did: I wanted to see whether it was an academic or an expert.

My assessment stands: GPT-4.5's 30 seconds of thinking captured how mission operationalizes vision in the way successful businesses actually work, not how academics might describe them in theoretical papers. I've read the papers and studied the theory deeply, but I also have NYSE and NASDAQ ticker symbols under my belt, from seed. That is the whole point here.


OK, maybe we are using different meanings of the word "operationalize".

If I were, say, in middle management and you asked me to "operationalize" the impact of mission statements, I would try to associate the existence of a mission statement on a team with some metric like financial performance.

If I were on a small development team and you asked me to "operationalize" our mission statement, I would probably make the same mistake the software industry always does, like tying it to tickets closed, lines of code, or even the DORA metrics.

Under my understanding of "operationalize", and the only way I can find it referenced in relation to mission statements themselves, I would actually de-emphasize deliverables, quality, stakeholders' changing demands, etc.

Even if I tried to "operationalize" in a more abstract way, like defining an impact score, it may not directly map to business objectives or even team building.
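To make the failure mode concrete, here is the kind of thing I would expect a team to produce; a toy sketch with invented names and weights, not anything from a real engagement:

  # Toy illustration: a naive "operationalized" mission score that collapses
  # a qualitative purpose into proxy metrics. Every name and weight is made up.
  def mission_impact_score(tickets_closed: int, loc_added: int, deploys: int) -> float:
      # The moment this number exists it gets gamed: easy tickets, padded
      # diffs, and deploy churn go up while hard, high-value work stalls.
      return 0.5 * tickets_closed + 0.3 * (loc_added / 1000) + 0.2 * deploys

  print(mission_impact_score(tickets_closed=42, loc_added=5800, deploys=9))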

Almost every LLM offers a definition similar to the one I offered above, e.g.:

> "operationalization" refers to the process of defining an abstract concept in a way that allows it to be measured and observed through specific, concrete indicators

Impact scores, which are subjective, can lead to Google's shelfware problems, and even scrum rituals often lead to hard but high-value tasks being ignored because the incentives don't allow for them.

In both of your cites, the situations were ones where existing cultures were enhanced, not fully replaced.

Both were also short-term, and wouldn't capture the long-tail problems I am referencing.

Heck, even Taylorism worked well for the auto industry until outside competition killed it. Well, at least for the companies; consumers suffered.

The point is that "operationalization" specifically is counterproductive under a model where infighting during that phase is bad.

If you care about delivering on execution, it would seem to be important to you. But I realize that you may not be targeting repeat work... I just don't know.

But I am sure some McKinsey BA has probably put that concern in a PDF someplace by now, because the GAO Agile Assessment Guide is being incorporated, and even ITIL and TOGAF reference that coal-face paper I cited.

The BCGs and McKinseys of the world are absolutely going to shift to detection of common confabulations to show value.

While I do take any tools possible to make me more productive, correctness of content concerns me more than exact verbiage.

But yes, different needs; I am in the niche of rescuing failed initiatives, which admittedly is far from the typical engagement style.

To be honest, the lack of scratch space on 4.5 compared to CoT models is the main blocker for me.


I believe 4.5 is a very large and rich model. The price is high because it's costly to run inference on; however, the bigger reason is to ensure that others don't distill from it. Big models have a rich latent space, but it takes time to squeeze the juice out.


That also means people won't use it. Way to shoot yourself in the foot.

The irony of a company that has distilled the world's information complaining about another company distilling their model...


The small number of use cases that do pay are providing gross margins as well as feedback that helps OpenAI in various ways. I don’t think it’s a stupid move at all.


My assumption: There will be use cases where cost of using this will be smaller than the gain from it. Data from this will make the next version better and cheaper.



