Hacker News | big_toast's comments

If you were a student in 2025, is CS193P (which looks SwiftUI-rendering heavy) still the hands-on foundation for the next big tinkerer, or would it look more like building around the affordances of AI? (Or something else?)


Uber's founder explaining tipping logic. I haven't seen such a precise formulation of the logic for including tipping in e-delivery/PoS platforms.

I'd be curious whether someone more familiar with economic theory thinks the statement about economic surplus is correct, and whether there's an antidote (or if the cure is worse than the disease). He agrees with another comment elsewhere about increased decision paralysis.

Other countries don't have the same tipping culture and presumably didn't inherit the biases into their e-delivery/PoS platforms.

Tweet:

"Delivery app tipping isn’t about feedback mechanisms.. it’s a tool for maximizing price paid by consumers… eaters are economically irrational with tip, for every $1 in tip, they economically behave as if it were $0.80 (directionally true but hypothetical figure) … this means less price elasticity for the same price… couriers are also economically irrational with tip, for every $1 in tip they economically behave as if it were $1.20 (directional)

The tip is a hack on human psyche which the apps must implement and maximize or miss out on economic surplus that their competitor will use to defeat them.

The app that decides to pay the same net amount to the courier but as a square deal vs a drop fee + tip will lose market share every day to an equal marketplace player that implements and maximizes tip"
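The tweet's claim can be restated numerically. A minimal sketch, where the 0.8/1.2 multipliers are the tweet's own (hypothetical) figures and the $8 payout with its $3 + $5 split is a made-up example:

```python
# Illustrative model of the tweet's claim. Eaters are assumed to "feel"
# each $1 of tip as $0.80; couriers to value it as $1.20. Both multipliers
# are the tweet's hypothetical figures; the dollar amounts are invented.
EATER_TIP_FEEL = 0.8    # eaters treat $1 of tip as $0.80
COURIER_TIP_FEEL = 1.2  # couriers treat $1 of tip as $1.20

def felt(base, tip):
    """Perceived cost to the eater and perceived pay to the courier."""
    return base + EATER_TIP_FEEL * tip, base + COURIER_TIP_FEEL * tip

square_cost, square_pay = felt(8, 0)  # "square deal": $8 fee, no tip
split_cost, split_pay = felt(3, 5)    # same $8 net, as $3 fee + $5 tip

print(square_cost, square_pay)  # 8.0 8.0
print(split_cost, split_pay)    # 7.0 9.0
```

Same money changes hands either way, but in this model the tipped version feels cheaper to the eater and more lucrative to the courier, which is the claimed market-share mechanism.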


It’s an encoder-decoder transformer trained on audio (language?) and transcription.

Seems kinda weird for it not to meet the definition in an almost tautological way, even if that's not the typical sense or it doesn't tend to be used for autoregressive token generation?


Is it Transformer-based? If not then it's a different beast architecturally.

Audio models tend to be based more on convolutional layers than Transformers in my experience.


The openai/whisper repo and paper referenced by the model card seem to be saying it's transformer based.


I think part of getting by with a lower PPD is that IRL pixels are fixed and have hard boundaries that OS affordances have co-evolved with.

(pixel alignment via lots of rectangular things - windows, buttons; text rendering w/ that in mind; "pixel perfect" historical design philosophy)

The VR PPD is in arbitrary orientations, which will lead to more aliasing. macOS kinda killed its low-DPI experience via bad aliasing as it moved to the hi-DPI regime. Now we have SVG-like rendering instead of screen-pixel-aligned, baked, rasterized UIs.


macOS exposes a lot of affordances to code/XPC/services/etc. that Shortcuts (and previously Automator) use. They let you do basically anything you'd want on macOS programmatically, without going through accessibility frameworks, code signing, and sandboxing issues. iOS as well, to some extent.

Presumably, if OpenAI is dog-walked/locked out of these by Apple at some point, they would be stuck in the Chrome/Chromebook feature jail. My guess is this gives OpenAI a team they can put in charge of wedging themselves into the OS before Apple changes its mind or puts scare-box dialogs everywhere.

Either that or there's nothing so complicated and OpenAI just wants to re-build this stack inside ChatGPT as quickly and well as they can.


Not a big cyclist, but is that still true for lower-speed city riding (typical of flat, cycling-prone European countries), hillier SF, or mountain biking?

It seems obviously true for typical racing or distance scenarios. And I notice the wind even at lower speeds on e-bikes in SF.

But between their quad scenario and what I imagine as the urban car-replacement scenario, it doesn't seem as obvious.


Yes, it is true at all speeds and under all conditions. The system simply does not have the mass that would give it a great deal of gravitational potential energy, and it reaches a power equilibrium with the air at low speeds. Example:

A 100 kg rider at 15 km/h carries about 0.24 Wh of kinetic energy. At this speed there is probably roughly 11 N of air and rolling resistance, so the steady-state cost is about 3 Wh per km. If you go 1 km or more between stops, the amount you can expect to gain by regeneration is extremely small. It could perhaps extend your range by 5%, generously.
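A quick sanity check of those figures; the 11 N drag and the ~65% round-trip regen efficiency are assumed, illustrative numbers, not measurements:

```python
# Rough check of the regen-vs-drag argument: kinetic energy per stop vs
# energy spent fighting drag per km. Assumed values: 100 kg rider+bike,
# 15 km/h, ~11 N combined air + rolling resistance, ~65% regen efficiency.
mass_kg = 100
speed_ms = 15 / 3.6                    # 15 km/h in m/s

kinetic_wh = 0.5 * mass_kg * speed_ms ** 2 / 3600   # ~0.24 Wh per stop

drag_n = 11                            # air + rolling resistance, assumed
drag_wh_per_km = drag_n * 1000 / 3600  # ~3.06 Wh to cover 1 km

regen_eff = 0.65                       # generous round-trip efficiency
range_gain = regen_eff * kinetic_wh / drag_wh_per_km

print(f"{kinetic_wh:.2f} Wh/stop, {drag_wh_per_km:.2f} Wh/km, "
      f"~{range_gain:.1%} range gain at one stop per km")
```

With one stop per km this lands near the "5%, generously" figure; fewer stops shrink it further.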


Does that assume no pedaling, though? In my experience the pain of starts and stops dominates the joy of steady-state pedaling. Presumably the 3 Wh/km is free/"exercise", or some portion of it, whereas the 0.24 Wh (re-gainable with some loss) is all sweat and pain, imo.

If I'm understanding the math, maybe that scales the regenerative range-extension % by your tolerance for pedaling?


The difference is that humans (unlike motors) have pretty low max power limits.


I assume this comment is in relation to starting from a stop being unpleasant?

If it's w.r.t. the effect of low max power on low cumulative generation, I agree it does seem a little silly to arbitrage your power generation this way. But maybe the tradeoff is worth it in some circumstances, in their view?

Or maybe it's just a low cost addition as other commenters say.


Let's say 130 kg (80 kg rider + 50 kg bike) going 30 km/h (the e-bike limit): that's ~1.25 Wh.


I dunno, a whole subtree of the internet died and I'm not sure it ever really came back. It was a beautiful set of Galápagos Islands.


Wait, does Lima do isolation in a macOS context too?

It looks like Linux VMs, which Apple's container CLI (among others) covers at a basic level.

I'd like Apple to start providing macOS images that aren't the whole OS... unless sandbox-exec/libsandbox have affordances for something close enough?

You can basically ask Claude/ChatGPT to write its own jail (Dockerfile) and then run that via `container` without installing anything on macOS outside the container it builds (IIRC). Even the container CLI will use a container to build your container.


(Surely this issue must've been discussed/debated elsewhere ad nauseam, because it seems an odd design decision to leave out such a common macOS binding...)

But having only used Ghostty as-is and getting bamboozled by the copy/paste situation, this is game-changing. I was just going to wait till preferences had a GUI/TUI... so thanks!
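For anyone else bamboozled: the fix lives in Ghostty's plain-text config file. A sketch under my assumptions (the action names follow Ghostty's keybind docs as I remember them; verify against `ghostty +list-actions` before relying on this):

```
# ~/.config/ghostty/config
# (macOS may also read ~/Library/Application Support/com.mitchellh.ghostty/config)
keybind = super+c=copy_to_clipboard
keybind = super+v=paste_from_clipboard
```

Reload the config (or restart Ghostty) for the bindings to take effect.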


Is this more of an accounting thing?

Is there some (tax?) efficiency where OpenAI could take money from another source, pay it to Nvidia, and receive GPUs, but where taking investment from Nvidia instead acts as a discount in some way?

(In addition to Nvidia being realistically the efficient/sole supplier of an input OpenAI currently needs. So this gives

  1. Nvidia an incentive to prioritize OpenAI and induces a win/win pricing component on Nvidia's GPU profit margin so OpenAI can bet on more GPUs now

  2. OpenAI some hedge on GPU pricing's effect on their valuations as the cost/margin fluctuates with new entrants
)?


It sounds like Nvidia has so much cash already that they would prefer to own x% of OpenAI instead.

