
This is why (flawed though the process may be in other ways) a company like Amazon asks "customer obsession" questions in engineering interviews: to gather data on whether the candidate appreciates this point about needing to understand user problems, and on what steps the candidate takes to learn the users' POV, to walk a mile in their shoes so to speak.

Of course interview processes can be gamed, and the signal-to-noise ratio deserves skepticism, so nothing is perfect; but the core principle of WHY that question exists in the interview process (at Amazon and many, many other companies too) is exactly the reason you say it's your "favorite".

Also, IIRC there was some internal research done in the late 2010s or so which found that, across hiring-assessment data gathered from thousands of interviews, the single best predictor of positive on-the-job performance for software engineers was NOT how well candidates did on coding or system-design rounds, but rather how well they did in the Customer Obsession round.


I think it comes down to having some insight about the customer need and how you would solve it. Prior experience in the same domain is helpful, but it is neither a guarantee of nor a blocker to having that customer insight (lots of people work in a domain but have no idea how to improve it; conversely, an outsider might see something the "domain experts" have been overlooking).

I just happened to read the story of some surgeons asking a Formula 1 team to help improve their surgical processes, with spectacular results in the long term... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues: poor communication and lack of clarity, people reaching over each other to get to tools, too many people jumping to fix something like a hose coming loose (when you just need one person to do that one thing). F1 teams are very good at designing hyper-efficient, reliable processes to get complex pit stops done extremely quickly, and the surgeons benefited a lot from those process-engineering insights, even though none of it had anything specifically to do with medical/surgical domain knowledge.

Reference: https://www.thetimes.com/sport/formula-one/article/professor...

Anyways, back to your main question -- I find that it helps to start small... Are you someone who is good at using analogies to explain concepts in one domain to a layperson outside that domain? Or, even better, at using analogies that help a domain expert from domain A instantly recognize an analogous situation or opportunity in domain B (where they are not an expert)? I personally have found a lot of benefit from being naturally curious about learning/teaching through analogies, from treating the act of making analogies as a fun hobby in its own right, and from honing it professionally to be useful in cross-domain contexts. You don't need to build this up in your head as some grand mystery with a secret cheat code that unlocks being a founder in an unfamiliar domain -- you can start very small, practice making analogies with your friends or peers, and see if you can find fun ways of explaining things across domains together (either you explain something to them with an analogy, or they explain something to you and you try to analogize it from your POV).


I got curious and validated your source [1] to pull the exact quote:

"The proportion of Connecticut gambling revenue from the 1.8% of people with gambling problems ranges from 12.4% for lottery products to 51.0% for sports betting, and is 21.5% for all legalized gambling."

Without going into details, I do have some ability to check whether these numbers actually "make sense" against real operator data. I will try to sense-check whether the data I have access to roughly aligns with this or not.

- the "1.8% of people" being problem gamblers does seem roughly correct, per my own experience

- but those same 1.8% being responsible for 51% of sportsbook revenue does not align with my intuition (which could be wrong! hence why I want to check further...)

- it is absolutely true that sportsbooks have whales/VIPs/whatever-you-call-them, and the general business model is indeed one of those shapes where <10% of the customers account for >50% of the revenue (using very round, imprecise numbers), but I still don't think you can attribute 51% purely to the "problem gamblers" (unless you're using a non-standard definition of problem gambler, maybe?)


I'm sure nobody cares, but the data I can check shows a couple of interesting observations (I won't call them conclusions, that's too strong):

- Yes, you can find certain slices of 1.8% of customers that would represent 50%+ of revenue... But this is usually pretty close to simply listing out the top 1.8% of all accounts by spend

- Therefore, to support the original claim, one would essentially have to accept by definition that nearly all of the top revenue accounts are "problem gamblers" and almost no one else is... But that doesn't pass a basic smell test: population-wise there are more "poor" problem gamblers than "rich" ones (because there are far more poor people in general than rich people), so it's very unlikely that nearly all of the 1.8% of the total population who are problem gamblers also overlap so heavily with the top 1.8% of customer accounts by revenue. A toy back-of-envelope simulation of this is sketched below.
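To make the smell test concrete, here is a minimal, purely hypothetical simulation (made-up spend distribution, not operator data; the account count and Pareto shape parameter are assumptions) comparing the revenue share of a random 1.8% of accounts versus the top 1.8% by spend:

    # Hypothetical back-of-envelope sketch; not real operator data.
    import random

    random.seed(42)
    N = 100_000                    # simulated customer accounts (assumed)
    PROBLEM_RATE = 0.018           # ~1.8% problem-gambling prevalence, per the quoted study

    # Assume a heavy-tailed (Pareto-ish) spend distribution, matching the "whale" shape.
    spend = [random.paretovariate(1.5) for _ in range(N)]
    total = sum(spend)

    # Scenario A: problem gamblers are spread uniformly across the population.
    uniform_ids = set(random.sample(range(N), int(N * PROBLEM_RATE)))
    share_uniform = sum(spend[i] for i in uniform_ids) / total

    # Scenario B: problem gamblers are literally the top spenders (what a 51% share would imply).
    top_ids = set(sorted(range(N), key=lambda i: spend[i], reverse=True)[: int(N * PROBLEM_RATE)])
    share_top = sum(spend[i] for i in top_ids) / total

    print(f"revenue share if problem gamblers are a random 1.8% of accounts: {share_uniform:.1%}")
    print(f"revenue share if problem gamblers are the top 1.8% by spend:     {share_top:.1%}")

The point is only the contrast between the two scenarios: getting anywhere near 50% of revenue from 1.8% of accounts requires that group to be almost exactly the top spenders.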


In such scenarios (data engineering / DS / analytics is my personal background), I have learned not to underestimate the value of explicitly declaring, within Team X, that person X1 is dedicated to line L1, person X2 is dedicated to line L2, etc. (i.e., similar to your last line about embedding a person with that line of business).

In theory, it doesn't actually "change" anything, because Team X is still stuck supporting exactly the same number of dependencies + the same volume and types of requests.

But the benefit of explicit >>> implicit, the clarity/certainty of knowing who-to-go-to-for-what, the avoidance of context switching + the ability to develop expertise/comfort in a particular domain (as opposed to the team trying to uphold a fantasy of fungibility or that anyone can take up any piece of work at any time...), and also the specificity by which you can eventually say, "hey I need to hire more people on Team X, because you need my team for 4 projects but I only have 3 people..." -- all of that has turned out to be surprisingly valuable.

Another way to say it is: for Team X to be stretched like that initial state is probably dysfunctional, even terminally so, but it's a slow kind of decay/death. Rather than pretending it can work, pretending you can virtualize the work across people (as if people were hyper-threads in a CPU core, effortlessly switching tasks)... making it discrete/concrete/explicit, nominating who-is-going-to-work-on-what-for-whom, is actually a form of escalation: it forces the dysfunction to the surface and forces the organization to confront a sink-or-swim moment sooner than it otherwise would have (versus limping on, trying to pretend you can stay on top of the muddled mess of requests that keep coming in, stuck treading water and drowning slowly).

---

Of course, taking an accelerationist stance is itself risky, and those risks need to be managed. But for example, if the reaction to such a plan is something like, "okay, you've created clarity, but what happens if person X1 goes on vacation/gets-hit-by-bus, then L1 will get no support, right?"... That is the entire purpose/benefit of escalating/accelerating!

In other words, Team X always had problems, but they were hidden beneath a layer of obfuscation because of the way work was being spread around implicitly... it's actually a huge improvement if you've transformed a murky, unnameable problem into something as crisp and quantifiable as a bus-factor-of-1 problem (which almost everyone understands more easily/intuitively).

---

Maybe someday Team X could turn itself into a self-service platform, or an "X-as-a-service" offering, where the dependent teams do not need you to work with or for them, but rather just consume your outputs, your service(s)/product(s), etc. at arm's length. So you probably don't want to stay in this embedded or explicit "allocation" model forever.


The most apt framing I've read for reasoning about AI is to treat it like an extremely foreign, totally alien form of intelligence. Not necessarily that the models of today behave like this, but we're talking about the future, aren't we?

Just framing your question against a backdrop of "human benevolence", and implying it is a single dimension (a scalar value that could be higher or lower), is already too biased. You assume that logic which applies to humans can be extrapolated to AI. There is not much basis for this assumption, in much the same way that there is not much basis to assume a sentient alien gas cloud from Andromeda would operate on the same morals or concept of benevolence as we do.


A purely technology-minded compromise to this question (i.e., how to support both the "good" and "bad" kinds of recording) is probably something along the lines of expiry: enforcing a lack of permanence as the default (kind of like the digital-age, recording-centric version of "innocent until proven guilty", which honestly is one of the greatest inventions in the history of human legal systems). Of course, one should never make societal decisions purely from a standpoint of technological practicality.

Since you can't be sure what is "bad"/illegal, and people will record many things anyway without thinking too much about it, the default should be auto-expiry/auto-deletion after X hours/days, unless some reason or confirmation is provided to justify persistence.

For example, imagine we lived in a near future where AI assistants were commonplace. Imagine that recording was ubiquitous but legally mandated to default to "disappearing videos" like Snapchat, across all the major platforms (YouTube, TikTok, X, Twitch, Kick, etc.). Imagine that every day you, as a regular person doing regular things, get maybe 10,000 notifications of "you have been recorded in video X on platform Y, do you consent to this being persisted?"; that law enforcement has to go through a judge (kind of like a search warrant) to file "persistence warrants"; and that there is another channel for concerned citizens who want to persist video of a "bad guy" doing "bad things" (maybe something like an injunction against auto-deletion until a review body can look at the request).

Obviously this would be a ton of administrative overhead, a ton of micro-decisions to be made -- which is why I mentioned the AI-assistant angle: I could tell my personal AI helper, "here are my preferences, here is when I consent to recording and when I don't... knowing my personal rules, please go deal with the 10,000 notifications I get every day, thanks."

Of course, if there's disagreement or a lack of consensus, some rules have to be developed for combining the different parties' wishes. Take a recording of a child's soccer game where 8 parents consent to persistence and 3 don't: perhaps majority rule means the persistence side wins, but the majority has to pay the cost of API tokens for a blurring/anonymization service that protects the 3 who didn't want to be persisted. That could be a framework for handling disputed outcomes; a toy sketch follows.
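As a toy illustration of that majority-rule idea (all names, thresholds, and the cost model are hypothetical, not any real platform's API):

    from dataclasses import dataclass

    @dataclass
    class ConsentVote:
        subject_id: str
        consents_to_persist: bool

    def resolve_persistence(votes: list[ConsentVote], blur_cost_per_subject: float) -> dict:
        """Toy majority-rule resolution for a disputed recording.

        If a majority consents, the clip is persisted, dissenting subjects are
        blurred/anonymized, and the consenting majority splits the blurring cost.
        Otherwise the clip falls back to the default: auto-deletion at expiry.
        """
        yes = [v for v in votes if v.consents_to_persist]
        no = [v for v in votes if not v.consents_to_persist]

        if len(yes) > len(no):
            return {
                "persist": True,
                "blur_subjects": [v.subject_id for v in no],
                "cost_per_consenting_party": blur_cost_per_subject * len(no) / len(yes),
            }
        return {"persist": False, "action": "auto-delete at expiry"}

    # The soccer-game example: 8 parents consent, 3 do not.
    votes = [ConsentVote(f"parent_{i}", i < 8) for i in range(11)]
    print(resolve_persistence(votes, blur_cost_per_subject=1.50))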

I'm also purposely ignoring the edge-case problem of what happens if a bad actor persists the videos anyway; in short, I think the best we can do is impose civil penalties if an unwilling participant later finds out you kept their videos without permission.

Anyways, I know that's all super fanciful and unrealistic in many ways, but I think that's a compromise sort of world-building I can imagine, that retains some familiar elements of how people think about consent and legal processes, while acknowledging the reality that recording is ubiquitous and that we need sane defaults + follow-up processes to review or adjudicate disputes later (and disputes might arise for trivial things, or serious criminal matters -- a criminal won't consent to their recording being persisted, but then society needs a sane way to override that, which is what judges and warrants are meant to do in protecting rights by requiring a bar of justification to be cleared).


True, of course, that dollars are the end goal, but frankly it'd be better if they just took the dollars out of my pocket directly, instead of poisoning my brain first so they can trick me into handing some dollars over...

Obviously I'm being hyperbolic, but I think that if society survives past this phase, our descendants will eventually look back and judge us for letting psychological manipulation count as a valid economic process for generating dollars, in much the same way we judge our ancestors for building a whole industry around hunting whales for oil to use as fuel (meaning: we acknowledge that fuel was important and necessary to power an industrializing society, but we mock them for not figuring out how to refine petroleum sooner, and for how silly it was to go down the tech-tree branch of fucking whale hunting just to get some fuel).

It is fucking silly/absurd/dangerous that we are going down the tech-tree branch of psychological manipulation just to be able to sell some ads or whatever.


I think you're veering too far into politics on what was originally not a very political OP/thread, but I'll indulge you a tiny bit and also try to bring the thread back to the original theme.

You said a lot of words that I'd boil down to a thesis: the value of "truth" is being diluted in real time across our society (via flood-the-zone kinds of strategies), and there are powerful vested interests who benefit from that dilution. When I say powerful interests, I don't mean to imply Illuminati and Freemasons and massive conspiracies -- Trump is just some angry, senile fool with a nuclear football who, as you said, has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like or wishes weren't true. But corporations benefit too.

Google benefited tremendously from inserting itself into everyone's search habits, and squeezed some (a lot of) ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist -- the old search+ads business could spam you with low-quality results (in proto-form, the popup ads of yesteryear), but it didn't necessarily try to attack your view of "truth" directly. In the future, you may search for a product you want to buy and, instead of being served ads related to that product, be served disinformation to sway your view of what is "true".

And sure, negative advertising has always existed (one company bad-mouthing a competitor's products), but those things took time and effort/resources, and once upon a time we had things like truth-in-advertising laws and libel laws -- concepts that now seem quaint and unlikely to be enforced/supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion. And in a world where "truth" erodes, instead of a market incentive for someone to profit off being more truth-y than other market participants, I would expect the oligopolistic world we live in to conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth the way OPEC controls its flow of oil).


Why would you think it matters what you think? Keep your pretentious, supremacist narcissism to yourself and tell those you abuse what to do, because that is not going to matter here.


This is a really strange reply.


I think they just read my first sentence and decided to take offense immediately. Shrug.

All I meant was, I didn't want to go down the path of talking about Trump... that's a very, very dead horse to beat. I thought there were interesting elements of this person's ideas that were worth further discussion and could be split off from the Trump lightning rod, so I tried to do that. I generally agreed with their original ideas and wanted to build on or respond to them without getting sucked into wasting breath on Trump (nobody benefits, regardless of whether your views lean left or right).

I'm sure I could fix some gaps in the way I explained myself, but oh well, just another day on the internet.


As a manager, I am considering enforcing a rule on my team that no README in any repo should ever go stale again --> it should be near-trivial for every dev to ask Claude Code to read the existing README, read/interpret the code as it currently stands, read what's changed in the PR, and then update the README as necessary. This does not mean Claude will be perfect, or that engineers don't need to check that its summaries make sense (they do, and the human is always accountable for the changes at the end of the day); but it does mean that the ordinary laziness we are all often guilty of should no longer be an acceptable reason for READMEs going stale. A rough sketch of how this could be wired into CI is below.
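A minimal sketch of such a check, assuming it runs as a CI step on each PR branch and assuming Claude Code's non-interactive print mode (`claude -p`; the exact flag may differ by CLI version). The script and its logic are hypothetical, not an official workflow:

    import os
    import subprocess
    import sys

    def changed_files(base: str = "origin/main") -> list[str]:
        """List files changed on this branch relative to the base branch."""
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        )
        return [f for f in out.stdout.splitlines() if f.strip()]

    def is_readme(path: str) -> bool:
        return os.path.basename(path).lower().startswith("readme")

    def main() -> int:
        files = changed_files()
        code_changed = any(not is_readme(f) for f in files)
        readme_touched = any(is_readme(f) for f in files)

        if code_changed and not readme_touched:
            # Ask Claude Code to reconcile the README with the actual diff; a human
            # still reviews the proposed change before it is committed.
            prompt = (
                "Read README.md, read the current code, and read the diff against "
                "origin/main. Update README.md only where it has gone stale."
            )
            subprocess.run(["claude", "-p", prompt], check=True)  # assumption: headless print mode
            print("Code changed but README did not; review the proposed README update.")
            return 1  # fail the check so the author must confirm/commit the update
        return 0

    if __name__ == "__main__":
        sys.exit(main())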


Why have such a rule if at any moment in time the LLM could update the README ad hoc? Btw, your ingested READMEs will affect your LLM's code generation, and I have observed that more often than not it is better to exclude the READMEs from the context window.


No LLM will by default touch a README.md

They will when you run /init, but after that they won't look at it unless directed to do so.


Bold statement


As a thought exercise -- assume models continue to improve. Today, "using claude-code daily" is something you choose to do because it's useful, but it's not yet at the level of "absolute necessity, can't imagine work without it". What if it does reach that level of absolute necessity?

- Is your demand inelastic at that point, if having claude-code becomes effectively required to sustain your livelihood? Does pricing keep increasing until it's 1%/5%/20%/50% of your salary (because hey, what's the alternative? if you don't pay, you won't keep up with other engineers and will just lose your job completely)?

- But if tools like claude-code become such a necessity, wouldn't enterprises be the ones paying? Maybe -- but, like health insurance in America (a uniquely dystopian thing), your employer may pay some portion of the premiums while passing some costs on to you as the employee... Tech salaries have been cushy for a while now, but we might be entering a "K-shaped" inflection point --> if you are an OpenAI elite researcher, you might get a $100M+ offer from Meta; but if you are an average dev doing average enterprise CRUD, maybe your wages will be suppressed because the small cabal of LLM providers can raise prices and your company HAS to pay, which means you HAVE to bear the cost (or else what? you can quit and look for another job, but who's hiring?)

This is a pessimistic take, of course (and vastly oversimplified / too cynical). A more positive outcome might be that the increasing quality of AI/LLM options leads to a democratization of talent, or a blossoming of "solo unicorns"... personally I have toyed with calling this something like a "techno-Amish utopia", in the sense that Amish people believe in self-sufficiency and are not wholly resistant to technology (it's actually quite clever what sorts of technology they allow for themselves or not), so what if we could take that further?

If there were a version of that Amish mentality of loosely federated, self-sufficient communities (they have newsletters! they travel to each other! but they largely feed themselves, build their own tools, fix their own fences, etc.!), where engineers plus their chosen LLM partner could launch companies from home, manage their home automation / security tech, run a high-tech small farm, live off-grid on cheap solar, use excess electricity to mine Bitcoin if they choose, etc.... maybe there is actually a libertarian world that can arise, where we are no longer as dependent on large institutions to marshal resources, deploy capital, scale production, etc., because some of those things are more in reach for regular people in smaller communities, assisted by AI. This of course assumes that the cabal of LLM model creators can be broken, that you don't need to pay for Claude if a cheaper open-source-ish Llama-like alternative is good enough.


Well, my business doesn't rely on AI as a competitive advantage, at least not yet anyway. So as it stands, if Claude got 100x as effective but cost 100x more, I'm not sure I could justify the cost, because my market might just not be large enough. Which means I can either ditch it (for an alternative, if one exists) or expand into other markets... which is appealing but a huge change from what I'm currently doing.

As usual, the answer is "it depends". I guarantee though that I'll at least start looking at alternatives when there's a huge price hike.

Also, I suspect that a 100x improvement (if even possible) wouldn't just cost 100 times as much, but probably 100,000+ times as much. I also suspect that an improvement of 100x will be hyped as an improvement of at least 1,000x :)

Regardless, AI is really looking like a commodity to me. While I'm thankful for all the investment that got us here, I doubt anyone investing this late in the game at these inflated numbers is going to see a long-term return (other than Ponzi selling).

