Control, responsibility and accountability have to align.
* You should only be accountable for what you are responsible for.
* You should only be responsible for what you have control of.
Bad managers hate this structure because it makes them accountable for themselves and their subordinates, and prevents them from deflecting blame onto low-level employees.
Molecular dynamics describes very short, very small-scale dynamics, on the order of nanoseconds and angstroms (0.1 nm).
What you’re describing is more like whole cell simulation. Whole cells are thousands of times larger than a protein and cellular processes can take days to finish. Cells contain millions of individual proteins.
So that means we just can't simulate all the individual proteins; it's way too costly and might permanently remain that way.
The problem is that biology is insanely tightly coupled across scales. Cancer is the prototypical example. A single mutated letter in DNA in a single cell can cause a tumor that kills a blue whale. And it works the other way too. Big changes like changing your diet get funneled down to epigenetic molecular changes to your DNA.
Basically, we have to at least consider molecular detail when simulating things as large as a whole cell. With machine learning tools and enough data we can learn some common patterns, but I think both physical and machine learned models are always going to smooth over interesting emergent behavior.
Also you’re absolutely correct about not being able to “see” inside cells. But, the models can only really see as far as the data lets them. So better microscopes and sequencing methods are going to drive better models as much as (or more than) better algorithms or more GPUs.
I had an old colleague (at various points he was my boss, my colleague, and my subordinate at different places) who really opened my eyes to the power of saying you don't know how to do something.
I also used to fear appearing incompetent if I admitted to not knowing things, so I would hide my knowledge gaps whenever possible.
However, this colleague was the exact opposite. He would gleefully tell people he had no idea how to do certain things, would be a ready listener when the person he was talking to explained how it worked, and would heap praise on the person for their knowledge and teaching skills. He would always defer to other people as experts when he didn’t know, and would make sure our bosses and coworkers knew who had helped him and how much they knew about the topic.
What I saw and experienced was that this did NOT, in any way shape or form, make people think less of him. It did the exact opposite. First, it made people REALLY happy to help him with stuff; he made you feel so smart and capable when you explained things and helped him, everyone jumped at the opportunity to show him things. He learned so much because he made everyone excited to teach him, and made his coworkers feel smart and appreciated for their knowledge.
And then, when he did speak with confidence on a subject, everyone knew he wasn’t bullshitting, because we knew he never faked it. Since he gave everyone else the chances to be the expert and deferred all the time, you didn’t get the one-upmanship you often get when tech people are trying to prove their bonafides. People were happy to listen to him because he first listened to them.
I have really tried to emulate him in my career. I go out of my way to praise and thank people who help me, always try to immediately admit where my skills and experience are lacking, and don't try to prove myself in subjects I don't really know that well. It has worked well for me, as well.
There are a lot of possible design alternatives to TCP within the "create a reliable stream of data on top of an unreliable datagram layer" space:
• Full-duplex connections are probably a good idea, but certainly are not the only way, or the most obvious way, to create a reliable stream of data on top of an unreliable datagram layer. TCP's predecessor NCP was half-duplex.
• TCP itself also supports a half-duplex mode—even if one end sends FIN, the other end can keep transmitting as long as it wants. This was probably also a good idea, but it's certainly not the only obvious choice.
• Sequence numbers on messages or on bytes?
• Wouldn't it be useful to expose message boundaries to applications, the way 9P, SCTP, and some SNA protocols do?
• If you expose message boundaries to applications, maybe you'd also want to include a message type field? Protocol-level message-type fields have been found to be very useful in Ethernet and IP, and in a sense the port-number field in UDP is also a message-type field.
• Do you really need urgent data?
• Do servers need different port numbers? TCPMUX is a straightforward way of giving your servers port names, like in CHAOSNET, instead of port numbers. It only creates extra overhead at connection-opening time, assuming you have the moral equivalent of file descriptor passing on your OS. The only limitation is that you have to use different client ports for multiple simultaneous connections to the same server host. But in TCP everyone uses different client ports for different connections anyway. TCPMUX itself incurs an extra round-trip time delay for connection establishment, because the requested server name can't be transmitted until the client's ACK packet, but if you incorporated it into TCP, you'd put the server name in the SYN packet. If you eliminate the server port number in every TCP header, you can expand the client port number to 24 or even 32 bits.
• Alternatively, maybe network addresses should be assigned to server processes, as in Appletalk (or IP-based virtual hosting before HTTP/1.1's Host: header, or, for TLS, before SNI became widespread), rather than assigning network addresses to hosts and requiring port numbers or TCPMUX to distinguish multiple servers on the same host?
• Probably SACK was actually a good idea and should have always been the default? SACK gets a lot easier if you ack message numbers instead of byte numbers.
• Why is acknowledgement reneging allowed in TCP? That was a terrible idea.
• It turns out that measuring round-trip time is really important for retransmission, and TCP has no way of measuring RTT on retransmitted packets, which makes it hard to correct a ridiculously low RTT estimate that is causing excessive retransmission (see the sketch after this list).
• Do you really need a PUSH bit? C'mon.
• A modest amount of overhead in the form of erasure-coding bits would permit recovery from modest amounts of packet loss without incurring retransmission timeouts, which is especially useful if your TCP-layer protocol requires a modest amount of packet loss for congestion control, as TCP does.
• Also you could use a "congestion experienced" bit instead of packet loss to detect congestion in the usual case. (TCP did eventually acquire CWR and ECE, but not for many years.)
• The fact that you can't resume a TCP connection from a different IP address, the way you can with a Mosh connection, is a serious flaw that seriously impedes nodes from moving around the network.
• TCP's hardcoded timeout of 5 minutes is also a major flaw. Wouldn't it be better if the application could set that to 1 hour, 90 minutes, 12 hours, or a week, to handle intermittent connectivity, such as with communication satellites? Similarly for very-long-latency datagrams, such as those relayed by single LEO satellites. Together this and the previous flaw have resulted in TCP largely being replaced for its original session-management purpose with new ad-hoc protocols such as HTTP magic cookies, protocols which use TCP, if at all, merely as a reliable datagram protocol.
• Initial sequence numbers turn out not to be a very good defense against IP spoofing, because that wasn't their original purpose. Their original purpose was preventing the erroneous reception of leftover TCP segments from a previous incarnation of the connection that have been bouncing around routers ever since; this purpose would be better served by using a different client port number for each new connection. The ISN namespace is far too small for current LFNs anyway, so we had to patch over the hole in TCP with timestamps and PAWS.
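To make the RTT point above concrete, here is a minimal sketch (my own illustration with RFC 6298-style constants, not code from any real TCP stack) of the standard estimator together with Karn's rule of discarding samples from retransmitted segments, which is what makes a too-low estimate so hard to correct:

```c
#include <stdio.h>

/* Exponentially weighted RTT estimator, roughly following RFC 6298. */
struct rtt_state {
    double srtt;        /* smoothed RTT (seconds)   */
    double rttvar;      /* RTT variation estimate   */
    double rto;         /* retransmission timeout   */
    int    have_sample; /* first sample seen?       */
};

static void rtt_sample(struct rtt_state *s, double r, int was_retransmitted)
{
    /* Karn's rule: an ACK for a retransmitted segment is ambiguous, so it is
       not sampled. That means an estimate that is far too low, and is itself
       causing the retransmissions, never gets corrected this way. */
    if (was_retransmitted)
        return;

    if (!s->have_sample) {
        s->srtt = r;
        s->rttvar = r / 2.0;
        s->have_sample = 1;
    } else {
        double err = s->srtt - r;
        s->rttvar = 0.75 * s->rttvar + 0.25 * (err < 0 ? -err : err);
        s->srtt   = 0.875 * s->srtt + 0.125 * r;
    }
    s->rto = s->srtt + 4.0 * s->rttvar;
    if (s->rto < 1.0)
        s->rto = 1.0;   /* RFC 6298 lower bound */
}

int main(void)
{
    struct rtt_state s = {0};
    rtt_sample(&s, 0.200, 0);   /* 200 ms measurement            */
    rtt_sample(&s, 0.250, 1);   /* retransmitted: sample ignored */
    printf("srtt=%.3f s rto=%.3f s\n", s.srtt, s.rto);
    return 0;
}
```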
I like the term prompt performance; I am definitely going to use it:
> prompt performance (n.)
> the behaviour of a language model in which it conspicuously showcases or exaggerates how well it is following a given instruction or persona, drawing attention to its own effort rather than simply producing the requested output.
There's already a named attack on Kyber as well: `KyberSlash`.
http://kyberslash.cr.yp.to/
As best I can tell it seems to be implementation-specific rather than a problem with Kyber as a spec, but it's still worth knowing about.
> said staff was hired to work on the stack you are using
Looking back at hiring decisions I've made at various levels of various organizations, this is probably the single biggest mistake I've repeated: hiring people for a specific technology because that was specifically what we were using.
You'll end up with a team unwilling to change, because "you hired me for this; even if something else is best for the business, this is what I do".
Once I and the organizations I worked with shifted our mindset to hiring people who are more flexible (they may have expertise in one or two specific technologies, but they won't bury their heads in the sand whenever change comes up), everything became a lot easier.
C locales are one of those optimistic features we have inherited that, in retrospect, ended up being misguided and more trouble than they're worth. Any program that really needs to deal with localization will probably end up pulling in something like ICU to handle all the cases, while on the other hand locales cause all sorts of weird issues for programs that never expected to be localized. As a bonus, locale support incurs a heavy performance hit.
In this case it's extra awkward to attempt Unicode support in a function that takes a single char as input; it can't actually handle arbitrary Unicode code points anyway.
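For instance (my own sketch, assuming a ctype-style function such as tolower(); the exact function under discussion may differ), a per-char API simply cannot see a multi-byte UTF-8 code point, and what it does see depends on the current locale:

```c
/* A sketch of why a per-char API cannot do general Unicode case mapping.
   Assumes a ctype-style function such as tolower(); the exact function under
   discussion may differ. */
#include <ctype.h>
#include <locale.h>
#include <stdio.h>

int main(void)
{
    setlocale(LC_CTYPE, "");   /* behaviour now depends on the user's locale */

    /* "É" in UTF-8 is the two bytes 0xC3 0x89; tolower() sees them one at a
       time and has no way to treat them as a single code point. */
    const unsigned char e_acute[] = { 0xC3, 0x89, 0 };
    for (const unsigned char *p = e_acute; *p; p++)
        printf("%02X -> %02X\n", (unsigned)*p, (unsigned)tolower(*p));

    /* In a single-byte locale such as ISO-8859-1, É is the single byte 0xC9
       and tolower() can map it to 0xE9, so what the function can and cannot
       do silently depends on the active locale and encoding. */
    return 0;
}
```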
I feel a common theme with these sorts of things is the belief that difficult problems can be made tractable by presenting a "simple" naive interface while fudging things behind the scenes. Those supposedly simple interfaces actually become complex to think about once you start asking difficult questions about correctness, error handling, and edge cases.
Funny enough, one of my attorneys taught me a lesson a long time ago around this. Simplified, she said "only an idiot claims to have lots of documents" to support their action. Sure, it's the easy/lazy way to try to intimidate the people who know the least about how things work. But anyone with the slightest clue knows 1) talk is cheap, 2) you don't need a lot of docs, you just need the one that matters, and 3) if you claim to have documents, you'll eventually have to produce them, and if you can't, you look like an idiot.
Maybe put another way...don't let your mouth write checks your body can't cash.
> Reddit was knowingly ruined by google. Once google pushed reddit to the top of search results
Ehhhhh I agree and yet also disagree (it's fun though).
Yes they were ruined by being promoted by algo changes, but do I blame google directly? For me, no.
It's exactly as we stated before: it's because it was so trustworthy. Individual people's personal experiences with X or Y, many times with good details. That earned a lot of strong backlinks, blogs, etc. The domain became authoritative, especially on esoteric searches. Then the algo changes came (remember Panda?) and pushed them even further. I mean, that's the point of search systems, right? Get you to the trustworthy information you're looking for.
Then the money grabbers showed up.
So it's just like Harvey Dent said - either you die a trusted niche community or live long enough to see yourself become weaponized for money. He was so smart, that Harvey Dent.
I loved the idea of QNX. Got way excited about it. We were moving our optical food processor from dedicated DSPs to general purpose hardware, using 1394 (FireWire). The process isolation was awesome. The overhead of moving data through messages, not so much. In the end, we paid someone $2K to contribute isochronous mode/dma to the Linux 1394 driver and went our way with RT extensions.
It was a powerful lesson (amongst others) in what I came to call "the Law of Conservation of Ugly". In many software problems, there's a part that is just never going to feel elegant. You can make one part of the system elegant, but that often causes the inelegance to surface elsewhere in the system.
> What I'm curious is why the platforms don't adapt to how the developers have found works best?
Here's my take:
- The web was visualized as a way to publish academic documents in a hyperlinked document system. Librarians and academics live in this world. We hear the word "semantic" from them a lot.
- The visual web was visualized as a way to publish documents that had a precise look. Graphic designers live in this world. They don't care about semantics. They do care about pixel perfect layouts and cool effects.
- The web app was visualized as a way to deliver software to users with lazy, "click link to install"-like behavior. What this crowd cares about is providing server functionality to users, and other concerns like semantics or pixel perfect are often secondary.
- The single page web app is also visualized as a way to deliver software to users with lazy, "click link to install"-like behavior. They differ from the web app group in that they try to have more server functionality right in the client. Again, semantics and pixel perfection are secondary. App complexity is a big problem that this group contends with, and this is what the article discusses.
Given these different ways of visualizing the web (and I'm sure I've left a few out), it's no wonder that we're stuck with the mess that is today's web development. The right solution is a sensible runtime for app development, that doesn't force you to render UI through the DOM and doesn't make it hard for you to get access to basic things like the local file system. We've known this forever (anyone remember Flash?).
WASM feels like it might finally allow app developers to do all of the software things that native platform developers get to do easily, with the added bonus of strong sandboxing. It's early days yet; I think the "Ruby on Rails" moment has not arrived there yet, i.e. a very popular, easy way for devs to create whatever app-du-jour everyone's excited about.
Have a bunch of checkboxes at the top, one for each buzzword, each technology, and anything else a 12 year old wouldn't be familiar with.
You check the ones you think you're familiar with, and everything else unfolds into a short description with links to similar interactive documents.
Each section comes with a 1-5 star rating for how well the reader understood your explanation.
Then you gather the data as the subjects suffer through the tutorial.
If people come from specific backgrounds, further tailor the explanation for them (like babazoofoo for C++ developers).
Let there be a browser extension or an API that checks (and hides) the familiar boxes for you.
I didn't say it was possible to make. It would be glorious to have. If you know all the tech involved, the whole thing implodes into a one-line code example.
Reading through bad setup docs is 10x more stressful when they are part of new employee onboarding.
I've always advocated for new employees' first contributions to be fixing the problems they hit in these setup materials. They come in with fresh eyes and no context, so they are the best possible reviewers.
I got into quant finance 12 years ago with the mistaken idea that I was going to successfully use all these cool machine learning techniques (genetic programming! SVMs! neural networks!) to run great statistical arbitrage books.
Most machine learning techniques focus on problems where the signal is very strong, but the structure is very complex. For instance, take the problem of recognizing whether a picture is a picture of a bird. A human will do well on this task, which shows that there is very little intrinsic noise. However, the correlation of any given pixel with the class of the image is essentially 0. The "noise" is in discovering the unknown relationship between pixels and class, not in the actual output.
Noise dominates everything you will find in statistical arbitrage. An R^2 of 1% is something to write home about. With this amount of noise, it's generally hard to do much better than a linear regression. Any model complexity has to come from integrating over latent parameters or manual feature engineering; the rest will overfit.
I think Geoffrey Hinton said that statistics and machine learning are really the same thing, but since we have two different names for it, we might as well call machine learning everything that focuses on dealing with problems with a complex structure and low noise, and statistics everything that focuses on dealing with problems with a large amount of noise. I like this distinction, and I did end up picking up a lot of statistics working in this field.
I'll regularly get emails from friends who tried some machine learning technique on some dataset and found promising results. As the article points out, these generally don't hold up. Accounting for every source of bias in a backtest is an art. The most common mistake is to assume that you can observe the relative price of two stocks at the close, and trade at that price. Many pairs trading strategies appear to work if you make this assumption (which tends to be the case if all you have are daily bars), but they really do not. Others include: assuming transaction costs will be the same on average (they won't; your strategy likely detects opportunities at times when the spread is very large and prices are bad), assuming index memberships don't change (they do, and that creates selection bias), assuming you can short anything (stocks can be hard to short or have high borrowing costs), etc.
In general, statistical arbitrage isn't machine learning bound(1), and it is not a data mining endeavor. Understanding the latent market dynamics you are trying to capitalize on, finding new data feeds that provide valuable information, carefully building out a model to test your hypothesis, and deriving a sound trading strategy from that model: that is how it works.
(1: this isn't always true. For instance, analyzing news with NLP, or using computer vision to estimate crop outputs from satellite imagery can make use of machine learning techniques to yield useful, tradeable signals. My comment mostly focuses on machine learning applied to price information. )
> There is a prevalent culture of expecting users to not make mistakes.
I think the older C/C++ programmers among us come from no-safety languages like assembly. That doesn't mean that all of us are "macho programmers" (as I was called here once). C's weak typing and compilers that merely emit warnings give a false sense of security, which is tricky to deal with.
The statement you make is not entirely correct. The more correct statement is that there is a prevalent culture of expecting users to find strategies to avoid mistakes. We are engineers. We do what we need with what we have, and we did what we had to with what we had.
When you program with totally unsafe languages, you develop more strategies than just relying on a type checker and borrow checker: RAII, "crash early", TDD, completion-compatible naming conventions, even syntax highlighting (coloring octal numbers differently)...
BUT: the cultural characteristics of the programmers are only one-quarter of the story. The bigger part is about company culture, and more specifically the availability of programmers. You won't promote safer languages and safer practices by convincing programmers that they have zero impact on performance. It's the companies that you need to convince that the safe alternatives are as productive [1] as the less safe ones.
I've had two interactions with Wendy's AI drive-through, and the first time I was pleasantly surprised, but the second time it would not stop suggesting add-ons after every single thing I said. It was comically pushy.
A human would have pretty quickly picked up on my increasingly exasperated "no, thanks" and stopped doing it, but the AI was completely blind to my growing frustration, following the upsell directive without any thought.
It reminded me of when I worked in retail as a kid and we were required to ask customers if they needed any batteries at checkout, even if they were just buying batteries. I learned pretty quickly to ignore that mandate in appropriate situations (unless the manager was around).
Makes me wonder how often employees are smart enough to ignore hard rules mandated by far-off management that would hurt the company's reputation if they were actually followed rigidly. AI isn't going to have that kind of sensitivity to subtle clues in human interaction for some time, I suspect.
It's easy to do both at the project level and globally, and these days there are very few legit packages that don't work without install scripts. For those that do need them, you can add a separate installation script to your project that cds into that package's folder and runs its install script.
I know this isn't a silver bullet solution to supply chain attacks, but so far it has been effective against many attacks that came through npm.
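For reference, a rough sketch of what that setup can look like (the `ignore-scripts` option is npm's; the package name and the explicit opt-in step are purely illustrative):

```sh
# Disable lifecycle scripts (preinstall/install/postinstall) globally...
npm config set ignore-scripts true

# ...and/or per project, by committing the setting to the project's .npmrc
echo "ignore-scripts=true" >> .npmrc

# For the rare dependency that genuinely needs its install step, do what the
# comment above suggests: a small script that opts in explicitly (hypothetical
# package; the exact command depends on what that package's install script does)
cd node_modules/some-native-pkg && npm run install
```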
Very cool stuff, text rendering is a really hairy problem.
I also got nerd sniped by Sebastian Lague's recent video on text rendering [0] (also linked to in the article) and started writing my own GPU glyph rasterizer.
In the video, Lague makes a key observation: most curves in fonts (at least for the Latin alphabet) are monotonic. Monotonic Bezier curves are contained within the bounding box of their end points (this applies to any monotonic curve, not just Beziers). The curves that are not monotonic are very easy to split by solving the zeros of the derivative (a linear equation) and then splitting the curve at that point. This is also where Lague went astray and attempted a complex procedure using geometric invariants, when it's trivially easy to split Beziers using de Casteljau's algorithm as described in [1]. It made for entertaining video content, but I was yelling at the screen for him to open Pomax's Bezier curve primer [1] and just get on with it.
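For the curious, here is roughly what that looks like (my own sketch, not Lague's or the article's code): the extremum of a quadratic's derivative is a one-line formula, and the de Casteljau split is just three lerps.

```c
/* Splitting a quadratic Bezier into x-monotonic pieces: the zero of the
   derivative's x component is a linear equation, and the split itself is
   de Casteljau, i.e. three lerps. */
#include <stdio.h>

typedef struct { float x, y; } vec2;

static vec2 lerp2(vec2 a, vec2 b, float t)
{
    return (vec2){ a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

/* de Casteljau split of (p0, p1, p2) at parameter t:
   left piece = (p0, q0, r), right piece = (r, q1, p2). */
static void split_quadratic(const vec2 p[3], float t, vec2 left[3], vec2 right[3])
{
    vec2 q0 = lerp2(p[0], p[1], t);
    vec2 q1 = lerp2(p[1], p[2], t);
    vec2 r  = lerp2(q0, q1, t);
    left[0] = p[0]; left[1] = q0; left[2] = r;
    right[0] = r;   right[1] = q1; right[2] = p[2];
}

/* Parameter where B'(t) = 2((1-t)(p1-p0) + t(p2-p1)) has zero x component;
   a value outside (0,1) means the curve is already monotonic in x. */
static float x_extremum(const vec2 p[3])
{
    float denom = p[0].x - 2.0f * p[1].x + p[2].x;
    if (denom == 0.0f)
        return -1.0f;   /* derivative's x component never changes sign */
    return (p[0].x - p[1].x) / denom;
}

int main(void)
{
    vec2 p[3] = { {0, 0}, {2, 1}, {0, 2} };   /* bulges right: not monotonic in x */
    float t = x_extremum(p);
    if (t > 0.0f && t < 1.0f) {
        vec2 l[3], r[3];
        split_quadratic(p, t, l, r);
        printf("split at t=%.3f, join point (%.3f, %.3f)\n", t, l[2].x, l[2].y);
    }
    return 0;
}
```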
For monotonic curves, it is computationally easy to determine the winding number contribution for any pixel outside the bounding box of the curve. It's +1 if the pixel is to the right of or below the bounding box, -1 if to the left or above, and 0 in the diagonal regions outside that "plus sign" shaped area.
Furthermore, this can be expanded to solving the winding number for an entire axis-aligned box. This can be done for an entire GPU warp (32 to 64 threads): each thread in the warp looks at one curve and checks whether the winding number contribution is the same for the whole warp's box; if it is, it accumulates it, and if not, it sets a bit indicating that this curve needs to be evaluated per thread.
In this way, very few pixels actually need to solve the quadratic equation for a curve in the contour.
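Here is a rough sketch of that per-curve classification for a single pixel, assuming a horizontal ray cast toward +x (the sign convention and the sentinel value are mine, so they may not match the exact scheme described above):

```c
#include <stdio.h>

typedef struct { float min_x, min_y, max_x, max_y; } bbox;

enum { NEEDS_SOLVE = 2 };   /* sentinel: bounding box alone isn't enough */

/* Winding contribution of one y-monotonic segment for a ray from (px, py)
   toward +x. y_start/y_end are the segment's endpoint y values, so the sign
   of the crossing comes from the direction the curve travels in y. */
static int winding_from_bbox(bbox b, float y_start, float y_end, float px, float py)
{
    if (py < b.min_y || py >= b.max_y)   /* ray misses the segment's y range    */
        return 0;
    if (px >= b.max_x)                   /* segment is entirely behind the ray  */
        return 0;
    if (px < b.min_x)                    /* crossing is certain; no root needed */
        return (y_end > y_start) ? +1 : -1;
    return NEEDS_SOLVE;                  /* pixel overlaps the box in x: solve  */
}

int main(void)
{
    bbox b = { 10, 10, 20, 20 };
    printf("%d\n", winding_from_bbox(b, 10, 20, 5, 15));   /* +1, cheap       */
    printf("%d\n", winding_from_bbox(b, 10, 20, 15, 15));  /* 2 = needs solve */
    return 0;
}
```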
There's still one optimization I haven't done: solving the quadratic equation for 2x2 pixel quads. I solve both the vertical and horizontal winding numbers for good robustness on horizontal and vertical lines. But the solution of the horizontal quadratic for a pixel and for the pixel below it is the same +/- 1, and ditto for vertical. So you can solve the quadratic for two curves (a square root and a division, expensive arithmetic ops) for the price of one if you do it for 2x2 quads and use a warp-level swap to exchange the results and add or subtract 1. This can only be done in orthographic projection without rotation, but the rest of the method also works with perspective, rotation and skew.
For a bit of added robustness, Jim Blinn's "How to solve a quadratic equation?" [2] can be used to get rid of some pesky numerical instability.
I'm not quite done yet, and I've only got a rasterizer, not the other parts you need for a text rendering system (font file i/o, text shaping etc).
But the results are promising: I started at 250 ms per frame for a 4k rendering of an '@' character with 80 quadratic Bezier curves, evaluating each curve at each pixel, and got down to 15 ms per frame by applying the warp-level and monotonic-bounding-box optimizations.
These numbers are not very impressive because they are measured on a 10 year old integrated laptop GPU. It's so much faster on a discrete gaming GPU that I could stop optimizing here if that were my target HW. But it's already fast enough for real-time practical use on the laptop, because I was drawing a glyph the size of the entire screen for the benchmark.
Directionally I think this is right. Most LLM usage at scale tends to be filling the gaps between two hardened interfaces. The reliability comes not from the LLM inference and generation but the interfaces themselves only allowing certain configuration to work with them.
LLM output is often coerced back into something more deterministic such as types, or DB primary keys. The value of the LLM is determined by how well your existing code and tools model the data, logic, and actions of your domain.
In some ways I view LLMs today a bit like 3D printers, both in terms of hype and in terms of utility. They excel at quickly connecting parts similar to rapid prototyping with 3d printing parts. For reliability and scale you want either the LLM or an engineer to replace the printed/inferred connector with something durable and deterministic (metal/code) that is cheap and fast to run at scale.
Additionally, there was a minute during the 3D printer Gartner hype cycle when there were notions that we would all just print substantial amounts of consumer goods, when the reality is that the high-utility use cases are much narrower. There is a corollary here to LLM usage. While LLMs are extremely useful, we cannot rely on LLMs to generate or infer our entire operational reality, or even engage meaningfully with it, without some sort of pre-existing digital modeling as an anchor.
People nowadays don't understand how genius MS was in the 90s.
Their strategy and execution was insanely good, and I doubt we'll ever see anything so comprehensive ever again.
1. Clear mission statement: A PC in every house.
2. A nationwide training + certification program for software engineers and system admins across all of Microsoft's tooling
3. Programming lessons in schools and community centers across the country to ensure kids got started using MS tooling first
4. Their developer operations division was an insane powerhouse; they had an army of in-house technical writers creating some of the best documentation that has ever existed. Microsoft contracted out to real software engineering companies to create fully fledged demo apps to show off new technologies. These weren't hello-world sample apps; they were real applications that had months of effort and testing put into them.
5. Because the internet wasn't a distribution platform yet, Microsoft mailed out huge binders of physical CDs with sample code, documentation, and dev editions of all their software.
6. Microsoft hired the top technical writers to write books on the top MS software stacks and SDKs.
7. Their internal test labs had thousands upon thousands of manual testers whose job was to run through manual tests of all the most popular software, dating back a decade+, ensuring it kept working with each new build of Windows.
8. Microsoft pressed PC OEMs to lower prices again and again. MS also put their weight behind standards like AC'97 to further drop costs.
9. Microsoft innovated relentlessly, from online gaming to smart TVs to tablets. Microsoft was an early entrant in a ton of fields. The first Windows tablet PC was in 1991! Microsoft tried to make smart TVs a thing before there was any content, or even widespread internet adoption (oops). They created some of the first e-readers, the first multimedia PDAs, the first smart infotainment systems, and so on and so forth.
And they did all this with a far leaner team than what they have now!
(IIRC the Windows CE kernel team was less than a dozen people!)
Honestly it follows the design of the rest of the language. An incomplete list:
1. They wrote it to replace C++ instead of Objective-C. This is obvious from hearing Lattner speak, he always compares it to C++. Which makes sense, he dealt with C++ every day, since he is a compiler writer. This language does not actually address the problems of Objective-C from a user-perspective. They designed it to address the problems of C++ from a user-perspective, and the problems of Objective-C from a compiler's perspective. The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).
2. They designed the language in complete isolation, to the point that most people at Apple heard of its existence the same day as the rest of us. They gave Swift the iPad treatment. Instead of leaning on the largest collection of Objective-C experts and dogfooding this for things like ergonomics, they just announced one day publicly that this was Apple's new language. Then proceeded to make backwards-incompatible changes for 5 years.
3. They took the opposite approach of Objective-C, designing a language around "abstract principles" vs. practical app decisions. This meant that the second they actually started working on a UI framework for Swift (the theoretical point of an Objective-C successor), 5 years after Swift was announced, they immediately had to add huge language features (view builders), since the language was not actually designed for this use case.
4. They ignored the existing community's culture (dynamic dispatch, focus on frameworks vs. language features, etc.) and just said "we are a type obsessed community now". You could tell a year in that the conversation had shifted from how to make interesting animations to how to make JSON parsers type-check correctly. In the process they created a situation where they spent years working on silly things like renaming all the Foundation framework methods to be more "Swifty" instead of...
5. Actually addressing the clearly lacking parts of Objective-C with simple iterative improvements which could have dramatically simplified and improved AppKit and UIKit. 9 years ago I was wishing they'd just add async/await to ObjC so that we could get modern async versions of animation functions in AppKit and UIKit instead of the incredibly error-prone chained didFinish:completionHandler: versions of animation methods. Instead, this was delayed until 2021 while we futzed about with half a dozen other academic concerns. The vast majority of bugs I find in apps from a user perspective are from improper reasoning about async/await, not null dereferences. Instead the entire ecosystem was changed to prevent nil from existing and under the false promise of some sort of incredible performance enhancement, despite the fact that all the frameworks were still written in ObjC, so even if your entire app was written in Swift it wouldn't really make that much of a difference in your performance.
6. They were initially obsessed with "taking over the world" instead of being a great replacement for the actual language they were replacing. You can see this from the early marketing and interviews. They literally billed it as "everything from scripting to systems programming," which generally speaking should always be a red flag, but makes a lot of sense given that the authors did not have a lot of experience with anything other than systems programming and thus figured "everything else" was probably simple. This is not an assumption; he even mentions in his ATP interview that he believes that once they added string interpolation they'd probably convert the "script writers".
The list goes on and on. The reality is that this was a failure in management, not language design though. The restraint should have come from above, a clear mission statement of what the point of this huge time-sink of a transition was for. Instead there was some vague general notion that "our ecosystem is old", and then zero responsibility or care was taken under the understanding that you are more or less going to force people to switch. This isn't some open source group releasing a new language and it competing fairly in the market (like, say, Rust for example). No, this was the platform vendor declaring this is the future, which IMO raises the bar on the care that should be taken.
I suppose the ironic thing is that the vast majority of apps are just written in UnityScript or C++ or whatever, since most of the App Store is actually games rather than utility apps written in the official platform language/frameworks, so perhaps at the end of the day ObjC vs. Swift doesn't even matter.
Easy Remote Job Opportunity! Pay is $1/hr. Perfect for retirees, disabled, and even kids! Requirements: have an eye ball. Duties: whatever you want, except when this device beeps, look into the camera.
Is Amazon's Mechanical Turk or whatever paying people to solve captchas still a thing?
Yeah. The problem with splitting up Google is that Google products, taken in isolation, are themselves keys to preventing other monopolies.
Split off Android to swim on its own and we get an iPhone monopoly. Split off Workspace and we go back to the days of MSOffice's monopoly. Splitting out Chrome essentially kills the World Wide Web as an application platform as no one else wants to support it. Cloud would probably stand alone competitively, but if not it's going to be an Amazon monopoly.
Basically Google is strong in search and ads (also AI, though that isn't a revenue center yet and there's lots of competition) and second place in everything else. IMHO it's very hard[1] to make a pro-consumer argument behind killing off all those second place products.
[1] And yeah, they pay my salary, but I work on open source stuff and know nothing about corporate governance.
As you seem to suggest in your last para, Law is useful, but also insufficient. It is often too rigid and cold. It is easily subverted, selectively applied, or conveniently re-interpreted by whoever has more power. Two cultures can have the same constitution yet produce greatly divergent results. You can see this in the failed democracy revolutions of the last 20 years. You can see this in comparing the state of American democracy today with the state of American democracy in the 60s. Both were tumultuous times, but the culture is far more selfist now than then.
I'm a socialist, but I'd take a socialist culture (defined by people genuinely giving the common good priority) on top of a free-market capitalist economy and political system over an individualist/selfist culture on top of a socialist economy and political system any day. The former will actively work to justly distribute resources and opportunity despite the dark tendencies of capitalism and free markets, while the latter will quickly devolve into a farce of socialism.
Progress is ultimately about the progress of Culture.
"Belief, like fear or love, is a force to be understood as we understand the theory of relativity, and principles of uncertainty. Phenomena that determine the course of our lives. Yesterday, my life was headed in one direction. Today, it is headed in another. Yesterday, I believe I would never have done what I did today. These forces that often remake time and space, they can shape and alter who we imagine ourselves to be, begin long before we are born, and continue after we perish. Our lives and our choices, like quantum trajectories, are understood moment to moment, at each point of intersection, each encounter, suggest a new potential direction."
When classifying living beings, a single classification criterion is not good enough.
One classification criterion is descendance from a common ancestor, i.e. cladistic classification.
In many cases this is the most useful classification criterion, because the living beings grouped in a class defined by having a common ancestor share a lot of characteristics inherited from their common ancestor, so when using a name that is applied to that class of living beings, the name provides a lot of information about any member.
However there are at least 2 reasons which complicate such a cladistic classification.
One is that the graph of the evolution of living beings is not strictly a tree, because there are hybridization events that merge branches.
Sometimes the branches that are merged are closely related, e.g. between different species of felids, so they do not change the overall aspect of the tree. However there are also merges between extremely distant branches, like the symbiosis event between some blue-green alga (Cyanobacteria) and some unicellular eukaryote, which has created the ancestors of all eukaryotes that are oxygenic phototrophs, including the green plants.
Moreover, there have been additional symbiosis events that have merged additional eukaryote branches and which have created the ancestors of other eukaryote phototrophs, e.g. the ancestor of brown algae.
After any such hybridization event, there is the question of how you should classify the descendants of the hybrid ancestor: as belonging to one branch or to the other of the branches that have been merged.
For some purposes it is more useful to classify all eukaryote phototrophs based on the branch that has provided the main nucleus of the hybrid cell, and this is the most frequently used classification.
For other purposes it is more useful to group together all the living beings that are oxygenic phototrophs, including various kinds of eukaryotes and also the blue-green algae, and divide them based on the evolution tree of their light-capturing organelles, i.e. the chloroplasts.
This is also a valid cladistic classification, because all oxygenic phototrophs, both eukaryotes and prokaryotes, are the descendants of a single common ancestor: some ancient phototrophic bacterium that switched from oxidizing manganese using light energy to oxidizing water, which releases free dioxygen.
Even when there are no branch merges due to hybridization, there remains the problem that in the set of descendants of a single ancestor there are some that are conservative, so they still closely resemble their ancestor, and some that are progressive, which may have changed so much that they no longer resemble their ancestor.
In this case, using the name of the entire group provides very little information, because most characteristics that were valid for the ancestor may be completely inapplicable to the subgroups that have become different. In such a case, defining and using a name for the paraphyletic set of subgroups that remains after excluding the subgroups that have evolved divergently may be more useful in practice than using only names based on a cladistic classification. For instance the use of the word "fish" with its traditional paraphyletic meaning, i.e. "vertebrate that is not a tetrapod", is very useful, and including tetrapods in "fishes" is stupid, because that would make "fish" and "vertebrate" synonymous and it would require the frequent use of the expression "fishes that are not tetrapods" whenever something is said that is correct only for vertebrates that are not tetrapods, or of the expression "bony fishes that are not tetrapods" for things valid for bony fishes but not for tetrapods.
While in many contexts it is very useful to know that both fungi and animals are opisthokonts, and there are a few facts that apply to all opisthokonts, regardless whether they are fungi, animals or other opisthokonts more closely related to fungi or more closely related to animals, the number of cases when it is much more important to distinguish fungi from animals is much greater than the number of cases when their common ancestry is relevant.
Animals are multicellular eukaryotes that have retained the primitive lifestyle of the eukaryotes, i.e. feeding by ingesting other living beings, which is made possible by cell motility.
Fungi are multicellular eukaryotes that have abandoned the primitive lifestyle of the eukaryotes, and which have reverted to a lifestyle similar to that of heterotrophic bacteria, just with a different topology of the interface between cells and environment (i.e. with a branched multicellular mycelium instead of multiple small separate cells).
This change in lifestyle has been caused by the transition to terrestrial life, which has been accomplished with a thick cell wall (of chitin) for avoiding dehydration; this has suppressed cell motility, making the ingestion of other living beings impossible, the same as for bacteria. Moreover, the transition to a bacterial lifestyle has also been enabled by several lateral gene transfers from bacteria, which have provided additional metabolic pathways that enable fungi to survive on simpler substances than those required by most eukaryotes, including animals.
So even from a cladistic point of view, fungi have some additional bacterial ancestors for their DNA, besides the common opisthokont ancestor that they share with the animals.
Animals are unique among eukaryotes, because all other multicellular eukaryotes have abandoned the primitive lifestyle of eukaryotes, by taking the lifestyles of either heterotrophic or phototrophic bacteria. However for both other kinds of lifestyle changes there are multiple examples, i.e. besides true fungi that are opisthokonts there are several other groups of fungous eukaryotes that are not opisthokonts, the best known being the Oomycetes. There are also bacteria with fungal lifestyle and topology, e.g. actinomycetes a.k.a. Actinobacteria.
If we ever explore other planets with life, those living beings will not have a common ancestor with the living beings of our planet, but it will nevertheless still be possible to classify them based on their lifestyle into about half a dozen groups analogous to: animals (multicellular living beings that feed by ingestion, so they must be mobile or must at least have some mobile parts), fungi (multicellular beings that grow into their food, absorbing it after external digestion), oxygenic phototrophs, anoxygenic phototrophs, chemoautotrophs, unicellular equivalents of animals and fungi (like protozoa and heterotrophic bacteria), and viruses.
These differences in lifestyles are more important in most contexts than the descendance from a common ancestor.
So while it is useful to have the name Opisthokonta for the contexts where fungi and animals and their close relatives must be included, it is much more frequent to need to speak separately about fungi and other fungous organisms on one hand, and animals on the other hand.
I agree that the term "kingdom" is obsolete when used in the context of a cladistic classification of the living beings.
Perhaps it should be retained for a non-cladistic classification of the living beings, based on the few fundamental lifestyles that are possible, and which would remain valid even for extraterrestrial living beings.
Years ago when my daughter was around 5 I was showing her a raspberry Pi zero I had just picked up. I told her - years ago before Daddy was your age a computer like this used to be as big as a house. Her response was - “houses were that small?”
A killer app in the like-an-iphone context is something that provides obvious value - if not outright delight - to a huge demographic.
Coding doesn't do that, because the demographic interested in coding is not huge compared to the rest of the population.
Chatbots don't do it either because they're too unreliable. I never know if I'm going to get a recommendation for something the LLM hallucinated and doesn't exist.
There's also huge cultural resistance to AI. The iPhone was perceived as an enabling device. AI is perceived as a noisy, low-reliability, intrusive, immoral, disabling technology that is stealing work from talented people and replacing it with work of much lower quality.
It's debatable how many of those perceptions are accurate, but it's not debatable the perceptions exist.
In fact the way OpenAI, Anthropic, and the others have handled this is a masterclass in self-harming PR. It's been an unqualified cultural disaster.
So any killer app has to overcome that reputational damage. Currently I don't think anything does that in a way that works for the great mass of non-technical non-niche users.
Also - the iPhone was essentially a repackaging exercise. It took the Mac+Phone+Camera+iPod - all familiar concepts - and built them into a single pocket-sized device. The novelty was in the integration and miniaturisation.
AI is not an established technology. It's the poster child for a tech project with amorphous affordances and no clear roadmap in permanent beta. A lot of the resistance comes from its incomprehensibility. Plenty of people are making a lot of money from promises that will likely never materialise.
To most people there is no clear positive perception of what it is, what it does, or what specifically it can do for them - just a worry that it will probably make them redundant, or at least less valuable.
> because it's hard enough that people don't try. and then they settle for rust. this is what i mean by "rust sucks the air out of the room".
I think it's the opposite. Rust made memory safety without garbage collection happen (without an unusably long list of caveats like Ada or D) and showed that it was possible, there's far more interest in it now post-Rust (e.g. Linear Haskell, Zig's very existence, the C++ efforts with safety profiles etc.) than pre-Rust. In a world without Rust I don't think we'd be seeing more and better memory-safe non-GC languages, we'd just see that area not being worked on at all.
> however, its clearly not impossible, for example this authors incomplete example:
Incomplete examples are exactly what I'd expect to see if it was impossible. That kind of bolt-on checker is exactly the sort of thing people have tried for decades to make work for C, that has consistently failed. And even if that project was "complete", the hard part isn't the language spec, it's getting a critical mass of programmers and tooling.
> what if it's not ten years, what if it could be six months?
If the better post-Rust project hasn't appeared in the past 15 years, why should we believe it will suddenly appear in the next six months? And given that it's taken Rust ~15 years to go from being a promising project to being adopted in the kernel, even if there was a project now that was as promising as the Rust of 15 years ago, why should we think the kernel would be willing to adopt it so much more quickly?
And even if that did happen, how big is the potential benefit? I think most fans of Rust or Zig or any other language in this space would agree that the difference between C and any of them is much bigger than the difference between these languages.
> youre risking getting trapped in a local minimum.
It's a risk, sure. I think it's much smaller than the risk of staying with C forever because you were waiting for some vaporware better language to come along.