Since the focus here is solely on the US, and the comments dwell too much on the supposed impossibility of heat dissipation, I want to add some information to broaden the perspective.
- In the EU, the ASCEND study, conducted in 2024 by Thales Alenia Space, found that data centers in space could be feasible by 2035 and could contribute to the EU's net-zero goal for 2050 [1].
- heat dissipation could be greatly enhanced with micro-droplet radiator technology, reducing the required radiator surface area by a factor of 5-10 (a rough sizing sketch follows at the end of this comment)
- data centers in space could offer advantages for processing space-generated data in orbit instead of sending it all back to Earth
- the Lonestar project demonstrated that data storage and edge processing in space (on the Moon and in cislunar orbit) are possible
- A hybrid architecture could dramatically change the heat budget:
+ optical interconnects reduce heat
+ photonic chips (e.g. Lightmatter and Q.ANT)
+ processing-in-memory might reduce energy requirements by a factor of 10-50
I think a hybrid architecture could provide decisive advantages, especially when designed for AI inference workloads.

[1] https://ascend-horizon.eu/
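To put the radiator claim above in perspective, here is a minimal back-of-the-envelope sketch using the Stefan-Boltzmann law. The module power (1 MW), radiator temperature (300 K), emissivity (0.9) and sink temperature are my own illustrative assumptions, not figures from the ASCEND study:

```python
# Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
# All numbers are illustrative assumptions, not figures from the ASCEND study.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float = 300.0,
                  emissivity: float = 0.9, sink_temp_k: float = 4.0) -> float:
    """Radiator area (m^2) needed to reject power_w watts to deep space."""
    return power_w / (emissivity * SIGMA * (temp_k**4 - sink_temp_k**4))

power = 1e6  # assume a 1 MW orbital data-center module
baseline = radiator_area(power)
print(f"Baseline radiator area: {baseline:,.0f} m^2")
# The micro-droplet claim above would shrink this by a factor of 5-10:
for factor in (5, 10):
    print(f"With a {factor}x improvement: {baseline / factor:,.0f} m^2")
```

In this toy example even a 10x improvement still leaves roughly 240 m^2 per megawatt, which is why the points above about producing less heat in the first place matter so much.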
Is it so hard to understand that the foreign staff are now afraid for their safety, and even for their lives?
After the killing of Pretti (execution is probably the more accurate word), I guess even some US staff cannot be so sure about what would happen to them.
__“But are there not many fascists in your country?”
“There are many who do not know – but will find it out when the time comes.”__
What a wonderful teacher! I wish all teachers were like him.
Regarding the collaboration before the exam, it really seems strange to me. In our generation, asking around or exchanging questions was perfectly normal. I got an almost perfect score in physics thanks to that. I suppose the elegant solution was in me anyway, but I might not have been able to come up with it in such a stressful situation. 'Almost' because the professor deducted one point from my score for being absent too often :)
However, oral exams in Europe are quite different from those at US universities. In an oral exam the professor can interact with the student and see whether they truly understand the subject, regardless of any written text. Allowing a chatbot during a written exam today would defeat the very purpose of the exam.
My experience is exactly the opposite. With AI, it's better to start small and simplify as much as possible. Once you have working code, refactor and abstract it as you see fit, documenting along the way, not the other way around. In a world abounding in imitations and perfect illusions, code is the crucial reality you need to anchor yourself to, not documents.
But that’s just me, and I'm not trying to convince anyone.
In my experience you do both: small AI spike demos to prove a specific feature or piece of logic, then top-down assembly into a superstructure. The difference is that I do the spikes on pure vibes, while reserving my design planning for the big system.
Or at least they could cache the results for a while and refresh them periodically, so they can compare the answers over time instead of wasting the planet's energy on their dumb design.
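A minimal sketch of what that could look like, assuming a hypothetical fetch_answer() for whatever backend is being queried; the cache keeps the old answers around so they can be compared over time:

```python
import time

CACHE_TTL_S = 3600  # re-query at most once per hour (illustrative value)
_cache: dict[str, list[tuple[float, str]]] = {}  # query -> [(timestamp, answer), ...]

def fetch_answer(query: str) -> str:
    """Hypothetical expensive call to the underlying model or service."""
    raise NotImplementedError

def cached_answer(query: str) -> str:
    """Return a cached answer, refreshing it only when the TTL has expired."""
    history = _cache.setdefault(query, [])
    now = time.time()
    if not history or now - history[-1][0] > CACHE_TTL_S:
        history.append((now, fetch_answer(query)))  # keep the old answers around
    return history[-1][1]

def answer_drift(query: str) -> list[str]:
    """Distinct answers seen so far, in order, so they can be compared over time."""
    seen: set[str] = set()
    drift = []
    for _, answer in _cache.get(query, []):
        if answer not in seen:
            seen.add(answer)
            drift.append(answer)
    return drift
```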
Reading the comments here about lawyering to make the prediction seem accurate, I have to say that for me the value of a prediction is not primarily its binary accuracy, but the insights I get from the prediction and adjustment process. Since our language is always limited, forcing everything into a binary yes/no will almost always leave some dissatisfaction. We never learn much from the binary outcome; we learn from the gray values and from how we have to change our judgements to adapt. Perhaps that's why punishment can provoke a reaction but not deep learning, and why good educators always strive for insight and self-correction.
Useful predictions should also not be black or white but should be presented with an uncertainty, a percentage of confidence if possible. That makes it possible to adjust one's prediction and confidence when new facts come along, and I would argue every serious predictor should do that.
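As a toy illustration of scoring predictions that carry a confidence percentage instead of a bare yes/no (all numbers below are made up), the Brier score is one simple way to reward calibrated confidence and penalize overconfident misses:

```python
def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated confidence and what actually happened.
    Lower is better; always saying 50% scores exactly 0.25."""
    return sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)

# Made-up predictions: (stated probability, did it happen?)
hedged = [(0.7, True), (0.6, False), (0.8, True), (0.55, True)]
overconfident = [(1.0, True), (1.0, False), (1.0, True), (1.0, True)]

print(brier_score(hedged))         # ~0.17: reasonably calibrated
print(brier_score(overconfident))  # 0.25: one confident miss makes it no better than always saying 50%
```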
What is it in this particular case that requires outdated tools? If it's code, surely you can write it in VS Code or whatever you like, and only need to compile and load it with the original tools, can't you?
It's more the library and language side. Typically you are years behind, and once a version has proven to work, the reluctance to upgrade is high. It gets really interesting with the rise of package managers and small packages: validating all of them is a ton of effort. It was easier with larger frameworks.
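As a rough illustration of why that validation effort balloons (Python's importlib.metadata is used here only as an example ecosystem; the same applies to npm, cargo, etc.), this sketch counts the installed distributions and their declared requirements, each of which someone would have to validate:

```python
# Rough illustration: count installed distributions and their declared requirements.
from importlib import metadata

dists = sorted(metadata.distributions(),
               key=lambda d: (d.metadata["Name"] or "").lower())
total_edges = 0
for dist in dists:
    requires = dist.requires or []
    total_edges += len(requires)
    print(f"{dist.metadata['Name']} {dist.version}: {len(requires)} declared requirements")

print(f"\n{len(dists)} distributions to validate, {total_edges} declared requirement edges")
```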
Sometimes it's because you need to support ancient, esoteric hardware that's not supported by any other tools, or because you've built so much of your own tooling around a particular tool that it resembles an application platform in its own right.
Other times it's just because there are lots of other teams involved in validation, architecture, requirements and document management, and for everyone except the developers, changing anything about your process is extra work for no benefit.
At one time I worked on a project with two compiler suites, two build systems, two source control systems and two CI systems all operating in parallel. In each case there was the "officially approved safe system" and the "system we can actually get something done with".
We eventually got rid of the duplicate source control, but only because the central IT team that hosted it declared it EOL, and thus the non-development teams were forced, kicking and screaming, to accept the system the developers had been using unofficially for years.