There are significant trade-offs with this technology.
It stores heat, so if you need electricity back you lose a lot of efficiency. I think Vernon said ~45% round-trip efficiency; batteries are 90%+.
The storage is at a high temperature (500-600 C), which means you can't use heat pumps to produce the heat being stored. This means you miss out on the roughly 4x energy gain (COP ~4) possible when converting electricity to heat with a heat pump.
So the efficiency is pretty low.
That said, solar PV is really cheap and moving large amounts of earth into a pile is also a very much solved problem so in some cases, notably higher latitudes which have very long days and low heat/electricity demand in the summer and the opposite in the winter, it could still be a very good solution.
The whole point is that the thermal energy is used directly, via district heating. These are not meant to store energy for electricity production (though they could do that if really needed – emergency power for various facilities? Maybe not worth it compared to diesel.)
Heat from existing thermal power plants can be stored directly and later distributed with no conversion loss; excess electricity from renewables can be turned to heat at 100% efficiency, but the problem is that peak heat demand and peak electricity supply do not typically coincide. Heat batteries are meant to solve that problem.
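To make the trade-offs concrete, here's a rough back-of-the-envelope sketch using the figures from the comments above (the 45% round trip, 90% battery round trip, and heat-pump COP of ~4 are the assumptions in play, not measured numbers):

```python
# Rough comparison of ways to use 1 kWh of surplus electricity.
# Assumptions from the discussion above: ~45% heat-to-electricity
# round trip for high-temperature heat storage, ~90% for chemical
# batteries, and a heat-pump COP of ~4 at low output temperatures.

surplus = 1.0  # kWh of excess renewable electricity

# Path 1: resistive heating into storage, later used directly as district heat.
direct_heat = surplus * 1.0        # ~1.0 kWh of heat, no conversion loss

# Path 2: same storage, but converted back to electricity later.
electricity_back = surplus * 0.45  # ~0.45 kWh of electricity

# Path 3: chemical battery round trip.
battery_back = surplus * 0.90      # ~0.9 kWh of electricity

# Path 4: heat pump delivering low-temperature heat immediately
# (COP ~4, but can't reach the 500-600 C storage temperature).
heat_pump_heat = surplus * 4.0     # ~4.0 kWh of heat

print(direct_heat, electricity_back, battery_back, heat_pump_heat)
```

The point the comment makes falls out directly: as long as the stored heat is consumed as heat, the storage path loses almost nothing; it's only the detour back to electricity that's expensive.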
"In response, Lord Watts added that young people “are not stupid”, explaining that if they assume they will “earn low incomes and there’s no future”, then youths will likely lower their aspirations as a result."
There's probably a lot more interesting info hiding behind that statement. If housing is massively unaffordable (as it is in the UK) and social mobility is rather low, why go out there and destroy yourself in low-wage jobs?
Not saying there's no spoiled youth waiting for their lottery ticket that will never come, but there's a rational aspect to it as well.
You're on the right track in channeling this negative energy into productive work on the company.
There are a few things to keep in mind, some of them you've already argued yourself. One, Engage is a common word, so that's on you, and two, more importantly, in today's SEO/ASO/other algorithmic wars, if you truly are the leader in this space, people will copy as much of your name/branding as possible to steal your customers' attention.
You are absolutely not in a unique position in this regard. If you want evidence, look at this top-50 generative AI mobile apps ranking: https://isarta.com/news/wp-content/uploads/2024/05/image-6-1.... Count the number of "Chat + something" names, and on top of that, the number of logos practically identical to ChatGPT's. That's the game these days: if you are successful, you will be copied relentlessly.
And the copycats might not even be copycats in the sense you're thinking. Automated customer engagement, LinkedIn or otherwise, is probably among the top 3 ideas that came to mind to anyone working in the sales/CRM space as soon as LLMs became convincingly human in conversation. So it's just thousands of people realizing the same opportunity at roughly the same time, going out to build it, maybe checking if there's a significant player in the space already, learning from their mistakes, copying the parts that made sense, and firing on all cylinders to become the leader in the space.
Yes, someone with more money might even beat you to the #1 spot, and the people who you think are your competitors right now might not even be relevant in a year, when various CRM companies build this functionality into their systems as a feature. In an even worse scenario, companies like Persana might be acquired with way worse numbers than you have, because of the network, the budget, the lower risk due to being ex-LinkedIn, etc.
None of this is particularly "fair" in the school playground sense of the word, but little in business is. If you have a true competitive advantage in terms of product, you have better odds than most, but maybe someone is going to beat you on distribution, pricing, marketing, or targeting, to the point that product will barely matter.
It's on you to figure out what you want to focus on, and what outcome you will be happy with. If you have the metrics, you can probably fundraise easily. You might not want to, because you want to bootstrap, but then stop wasting energy on thinking about competitors who are doing it differently. Whatever choices you make, make them, and focus on your own path.
A tremendous share of medical specialties face shortages, and fear of AI is not a relevant trend causing it. Even the link explaining shortages in the above article is pretty clear on that.
I do agree with the article's author's other premise, radiology was one of those fields that a lot of people (me included) have been expecting to be largely automated, or at least the easy parts, as the author mentions, and that the timelines are moving slower than expected. After all, pigeons perform similarly well to radiologists: https://pmc.ncbi.nlm.nih.gov/articles/PMC4651348/ (not really, but it is basically obligatory to post this article in any radiology themed discussion if you have radiology friends).
Knowing medicine, even when the tech does become "good enough", it will take another decade or two before it becomes the main way of doing things.
The reason AI is hyped is that it's easy to get the first 80% or 90% of what you need to be a viable alternative at some task. Extrapolating linearly, AI will do the last 10-20% in a few months or maybe a couple of years. But that first chunk was the low-hanging fruit, easy and fast; it may never be feasible to complete the last few percent. Then it changes from "AI replacement" to "AI assisted". I don't know much about radiology, but I remember that before the pandemic, one of the big fears was what we'd do with all the unemployed truck drivers.
> That's a hefty assumption, especially if you're including accuracy.
That's exactly what the comment is saying. People see AI do 80% of a task and assume development speed will follow a linear trend and the last 20% will get done relatively quickly. The reality is the last 20% is hard-to-impossible. Prime example is self-driving vehicles, which have been 80% done and 5 years away for the past 15 years. (It actually looks further than 5 years away now that we know throwing more training data at the problem doesn't fix it.)
Waymo barely works, with 24/7 monitoring by humans in a "fleet response" center[0], in 4 cities in the world. That's only 95% done if you're counting good enough for government work.
The monitoring might be 24/7, but its reaction time is useless in a life-and-death situation. I just cannot imagine a human being notified "I think I'm crashing into something" and taking over to do anything of significance within that second to avoid the crash (except hitting the brakes, which the car could do just as well). So don't read too much into the response team; it definitely has its uses, but it won't save you from plunging into a sinkhole that just appeared.
That's their point, I think; since the 50s or so, people have been making this mistake about AI and AI-adjacent things, and it never really plays out. That last '10%' often proves to be _impossible_, or at best very difficult. You could argue that OCR has finally managed it, at least for simple cases, but it took about 40 years.
> The reason AI is hyped is because it's easy to get the first 80% or 90% of what you need to be a viable alternative at some task.
No, it's because if the promise of certain technologies is reached, it'd be a huge deal. And of course, that promise has been reached for many technologies, and it's indeed been a huge deal. Sometimes less than people imagine, but often more than the naysayers who think it won't have any impact at all.
> There is a tremendous share of medicine specialties facing shortages
The supply of doctors is artificially constrained by the doctor cartel/mafia. There are plenty who want to enter but are prevented by artificial limits on training.
Medical professionals are highly paid, thus an education in medicine is proportionally expensive. An education in medicine is expensive, thus younger medical professionals need to be highly paid in order to afford their debt. Until this vicious cycle is broken (e.g. via less accessible student loans; making them more easily dischargeable is one way to spell "less accessible"), things are not going to improve. And there's also the problem that you want your doctors to be highly paid, because it's a stressful, high-responsibility job with a stupidly difficult education.
US doctors are ridiculously overpaid compared to the rest of the developed world, such as the UK or western EU. There's no evidence that this translates to better care at all. It's all due to their regulatory capture. One possible outcome is that healthcare costs continue to balloon and eventually it pops and the mafia gets disbanded and more immigrant doctors will be allowed to practice, driving prices to saner levels.
How doctors are licensed in the US compared to western Europe might explain why health care costs are higher in the US, but it does not explain why health care costs are rising so much. That's because health care costs are rising at similar rates in western Europe (and most of the rest of the first world).
For example, here are the ratios of per capita health care costs in 2018 to the costs in 2000 for several countries:
2.1 Germany
1.8 France
2.0 Canada
1.7 Italy
2.6 Japan
2.6 UK
2.3 US
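For a sense of scale, each of those multipliers implies an annualized growth rate over the 18-year window; a quick sketch of that arithmetic (the ratios are the figures listed above, nothing else is assumed):

```python
def annualized_growth(ratio, years):
    """Constant annual growth rate implied by a total cost ratio over `years`."""
    return ratio ** (1 / years) - 1

# Using the 2000-2018 per capita cost ratios above (18 years):
for country, ratio in {"US": 2.3, "UK": 2.6, "Italy": 1.7}.items():
    print(country, f"{annualized_growth(ratio, 18):.1%}")
# The US works out to roughly 4.7% per year, the UK to roughly 5.5%
```

So the US is compounding near the middle of the pack, not at the top, which is the commenter's point: whatever is driving the growth rate isn't specific to US licensing.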
Here are cost ratios over several decades, relative to 1970 costs, for the US, the UK, and France:
Doctor salary is not the only or perhaps even the main factor in healthcare expensiveness, but taking on the overall cost disease in healthcare would broaden the scope too wide for this thread, I think.
Also, I admit that the balloon may in fact never pop, since one theory says that healthcare costs so much simply because it can. It just expands until it costs as much as possible but not more. I'm leaning towards accepting Robin Hanson's signaling-based logic to explain it.
Yep, this is precisely what they argue. They don't simply say they want to keep their high salary and status due to undersupply. They argue that it's all about standards, patient safety etc. In the US, even doctors trained in Western Europe are kept out or strangled with extreme bureaucratic requirements. Of course, again the purported argument is patient safety. As if doctors in Europe were less competent. Health outcome data for sure doesn't indicate that, but smoke and mirrors remain effective.
I wouldn’t dismiss the premise so quickly. Other factors certainly play a role, but I imagine that after 2016, anyone considering a career in radiology would have automation as a prominent concern.
Automation may be a concern. Not because of Hinton, though. There is only so much time in the day. You don't become a leading expert in AI like Hinton without tuning out the rest of the world, which means a random Average Joe is apt to be in a better position than Hinton to predict when automation will be capable of radiology tasks. If an expert in radiology was/is saying it, then perhaps it is worth a listen. But Hinton is just about the last person you should listen to on this matter.
> even when the tech does become "good enough", it will take another decade or two before it becomes the main way of doing things.
What you're advocating for would be a crime against humanity.
Every four years, the medical industry kills a million Americans via preventable medical errors, roughly one third of which are misdiagnoses that were obvious in hindsight.
If we get to a point at which models are better diagnosticians than humans, even by a small margin, then delaying implementation by even one day will constitute wilful homicide. EVERY SINGLE PERSON standing in the way of implementation will have blood on their hands. From the FDA, to HHS, to the hospital administrators, to the physicians (however such a delay would play out) - every single one of them will be complicit in first-degree murder.
Waymo was providing 10,000 weekly autonomous rides in August 2023, 50,000 in June 2024, and 100,000 in August 2024.
Not everything has this trajectory, and it took 10 years more than expected. But it's coming.
Not saying AI will be the same, but underestimating the impact of having certain outputs 100x cheaper, even if many times crappier seems like a losing bet, considering how the world has gone so far.
Waymo is a great example, actually. They serve Phoenix, SF and LA. Those locations aren’t chosen at random, they present a small subset of all the weather and road conditions that humans can handle easily.
So yes: handling 100,000 passengers is a milestone. The growth from 10,000 to 100,000 implies it’s going to keep growing exponentially. But eventually they’re going to encounter stuff like Midwest winters that can easily stop progress in its tracks.
About driverless cars, new tech adoptions often start slow, until the iceberg tips and then it's very quick change. Like mobile phones today.
I remember thinking, before smartphones had all-day batteries and good touchscreens: these people really think the population will use phones more than desktop computers? Here we are.
I wouldn't say so, because the cars are not at all autonomous in our understanding of autonomous.
The cars aren't making all their decisions in real-time like a human driver. They, Waymo, meticulously mapped and continue to map every inch of the traversable city. They don't know how to drive, they know how to drive THERE.
It would be like if I went to the DMV to take a driving test. I would fail immediately, because the parking lot is not one I've seen and analyzed before.
"True" self-driving is not possible with our current implementation of automobiles. You cannot safely mix self-driving automobiles with human drivers. And the best solution is to converge toward known routes. We don't even necessarily have to program the routes - we can instead encode them in the road itself.
It might occur to you that I'm speaking about rail. The reality is it's trivial to automate rail systems, but the variables of free-form driving can't be automated.
Maybe this will be helpful to the author (partially already mentioned):
- Nonprofit business model does not equal "everything is free for the user forever", I'm guessing you already know that, but the wording on why you don't believe in nonprofit business models explicitly mentioned keeping everything free as the reason. You can earn revenue from users in a nonprofit business.
- You have a big audience with good engagement for the segment, there are multiple ways to make money without abandoning the core mission (job boards, screencast upsells for advanced courses, premium content, whatever else, look at how Remoteok.com makes money, copy-paste as the founder is super open on his process)
- Being a for-profit business and fundraising will temporarily solve your issue of having funds to run a business. It will not solve the issue of not knowing how to (or being afraid to) charge users or other parties for the value they get out of your product. You could already be solving this problem today, and you have a built-in audience pipeline of 2 million to solve it with.
I'm not dismissing the challenge of some business segments being extremely difficult to make money in despite the value being meaningful (I work in healthcare, so I know), but since your new business will effectively be in the same segment, do focus on the revenue aspect much sooner and much harder than you think you'll need to, because you already know what happens if you don't.
And big respect for what you've built in a super crowded space, you obviously have the product and user empathy chops needed, wishing you the best of luck on nailing the business chops!
Do you get your money back if you go to the store and buy some new food/fruit/snack you don't like the taste of?
No, you throw it away, and probably won't buy it again. If you don't like NYT, don't buy from them.
If NYT is like an avocado for you, sometimes ripe and delicious, sometimes unripe, sometimes rotten, you get to decide how often you're gonna buy avocados, or if you'll develop your own methods of avocado testing before buying to increase your odds. In no case do you get to take the avocado skin back to the store asking for a refund.
Perhaps a simpler analogy, you see a new bag/flavour of chips in the store, "super crunchy" "delicious", you buy it, go home, tastes horrible, barely crunchy, do you get to take it back and get your money back?
Very much agreed on a lot of the points there, and on that note, how new frameworks market to developers is probably a great lesson in that. Pieter Levels (of nomadlist.com and similar fame) recently talked about it on a podcast, how he basically sticks to PHP and jQuery, and how often he sees developers jumping on a new framework, not realizing it's likely a marketing tactic that's pulling them in.
The part that feels most like the advice above: "And same thing what happens with nutrition and fitness or something, same thing happens in developing. They pay this influencer to promote this stuff, use it, make stuff with it, make demo products with it, and then a lot of people are like, “Wow, use this.” And I started noticing this, because when I would ship my stuff, people would ask me, “What are you using?” I would say, “Just PHP, jQuery. Why does it matter?”
And people would start attacking me like, “Why are you not using this new technology, this new framework, this new thing?”
Worse yet is when the influencer is being paid to peddle the bundling of a handful of technologies that have existed for years and that you're already using, and everyone who doesn't understand that you're already doing that won't listen when you tell them.
What an evolution for the tech industry. From being the underdogs trying to change the world for the better, to becoming top dogs with all the same egoism and "money and power above all" attitude that Big Oil and similar industries have been known for over the decades.
From that perspective, Trump loves money, loves deregulation, and will open the door for whatever dystopian play if it provides enough benefits to him, so it only makes sense for him to be more favoured.
All that will come of this is more evidence of how power corrupts everyone, and a lot of money made.
Really interested in seeing how it fares in reality, almost sounds too good to be true.