I transitioned from a "normal" keyboard to a Glove80 last year. I immediately didn't like using the standard layout, so I switched to Colemak-DH which I've found pretty comfortable.
The thumb clusters are definitely a pain; I find myself only really using the 3 closest buttons on each side. Even then, the closest button on the upper row requires a stretch that gets uncomfortable after a while. It hasn't been bad enough for me to consider trying other keyboards though--the prices are too high for me to feel comfortable buying something I might not like.
To anyone that switches to a split keyboard, I strongly recommend also getting a trackball mouse (I use a Ploopy Adept). It lets you center the keyboard in front of you without needing to stretch too far while manipulating the mouse.
thanks, although i’m a member already. it’s not really resources that are the issue. i’ve read Julius Smith’s books in the past, hung out in the rust audio discord, etc.
i just have a mental block similar to the one i had with rust. avoided learning it for a long while until i made a decision to finally do it.
i just keep avoiding making the same decision here for some reason. not sure why. probably the old “it’s going to be really hard” thing i had with rust (which turned out to be rubbish, it just took time and repeating stuff over and over and learning from mistakes over and over).
For 25-player lobbies and some persistent state, I can't help but feel it's over-engineered. Certainly many games (including popular mmos) have been made with much less. This seems like they were engineering defensively against the potential of massive success, which ends up slowing everything down at the point where you need to move most quickly (pre-launch game design iteration and testing).
I don't mean to be reductive though, clearly a lot of work has gone into this architecture and they know their problems better than I do. Props to the devs for seeing it through and getting the game launched!
I've thought a bit about how I would add text-based procedural generation to my virtual world engine (I would love to have an "edit a book to change the world" experience, like Myst's linking books). The hard part is the composition of smaller things into larger concepts, since you have limited data.
For example, developers often devise a system of generic water tiles that they can fit together to form any sort of river. But how do you algorithmically go from the text "I want a river that starts large, gets narrow, then turns East" to a valid map layout? The water tiles and the ways they fit together are specific to the world, so there's nowhere near enough data to train a model. You could hardcode procedural logic based on key phrases ("river" + "start large" + "get narrow" + "turn east"), but that's a lot of work for a very limited implementation.
The most promising idea I've come up with is a node-based intermediate representation. You could use general data on rivers to determine how they're shaped, then use that to translate the user's prompt into a series of nodes on a map. Then, you just need to hand-implement a system to iterate the nodes and place tiles based on whether the next node is straight ahead, turning, etc. This approach could generalize to other concepts like mountains. At that point, it's just about making the text-based stage sufficiently predictive of what the user wants, rather than just a Logo-like drawing language.
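A minimal Python sketch of that two-stage idea (all names and the segment format are hypothetical; assume the text-understanding stage has already parsed "starts large, gets narrow, then turns East" into coarse segments):

```python
DIRS = {"north": (0, -1), "south": (0, 1), "east": (1, 0), "west": (-1, 0)}

def nodes_from_spec(segments, start=(0, 0)):
    # segments: [(direction, length, width), ...] - an imagined output of
    # the text stage. Each segment becomes a node (x, y, width) on the map.
    x, y = start
    nodes = [(x, y, segments[0][2])]
    for direction, length, width in segments:
        dx, dy = DIRS[direction]
        x, y = x + dx * length, y + dy * length
        nodes.append((x, y, width))
    return nodes

def tiles_from_nodes(nodes):
    # Second, world-specific pass: walk node pairs and emit one water tile
    # per step, interpolating the river width between nodes.
    tiles = {}
    for (x0, y0, w0), (x1, y1, w1) in zip(nodes, nodes[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0))
        if steps == 0:
            continue
        for i in range(steps + 1):
            t = i / steps
            x = round(x0 + (x1 - x0) * t)
            y = round(y0 + (y1 - y0) * t)
            tiles[(x, y)] = max(tiles.get((x, y), 0), round(w0 + (w1 - w0) * t))
    return tiles
```

Only `tiles_from_nodes` has to know about the world's actual tileset; the node stage stays generic, which is where the generalization to mountains etc. would come from.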
The part of incremental procedural generation that impresses me most is long-distance connections.
We want worlds that are too big to be completely stored, so there's conceptually a tree, where high-level features (continents? cities?) exist on the top, and low-level features (the colour of the fur of a specific citizen's pet?) towards the bottom. To get the same tree, no matter the order you expand it, is the key desirable feature of incremental procedural generation. Because then you get the feeling that it's in a sense "all there", and not just made up as you go along.
But making a tree like that is easy. The hard part is the long-distance connections. If you step down the tree to a continent, to a country, to a city, to a person... and you want that guy to have an aunt. Possibly on another continent. And you want to make it so that if you restarted with the same seed, and stepped down to the same aunt, she would still have that same nephew.
Most incremental procedural games have very little in the way of long distance connections, and it makes them lose their lustre quickly. What does it matter if there are 10000 procedurally generated cities in your world, if they don't have anything to do with each other?
A few games get by this by not being incremental, but simulating all that hard stuff up front. That's how they have aunts in Dwarf Fortress. But while it's impressive, it doesn't scale up. I think the way forward must be to figure out how to embellish (that is, expanding nodes in a seed-generated tree) just what you need, when you need it.
Language models are literally good at embellishing, but then you feel that lack of consistency and coherence, because there's no seed-generated tree at the root of it.
First, assign the number of potential aunts and nephews in each continent, city, etc. top-down in your tree (ensuring that the totals match at the top and that you can reversibly assign a compact integer to each aunt and to each nephew). Then you need an invertible pseudorandom permutation P of the right size, i.e. P(P^-1(x)) = x for 0 <= x < N. For nephew n, the aunt is a = P(n), and you can walk down the tree to generate the details of the entity which is aunt a. If you start with the aunt, you just use P^-1.
Cryptography has already developed families of functions P. For N=2^128, AES is such a family. For non power of two N, you want to look at "format-preserving encryption". For example you can use a Feistel network to permute the next larger power of two and simply iterate that permutation until you get a result in range.
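A toy Python sketch of that cycle-walking idea (a hash-based Feistel round function stands in for a real cipher here; this is illustrative, not cryptographically serious):

```python
import hashlib

def _round_fn(half, key, r, bits):
    # Toy round function: hash the key, round number, and input half,
    # then truncate to `bits` bits. A real FPE scheme would use AES here.
    h = hashlib.sha256(f"{key}:{r}:{half}".encode()).digest()
    return int.from_bytes(h[:8], "big") & ((1 << bits) - 1)

def feistel(x, key, bits, rounds=4, inverse=False):
    # Balanced Feistel permutation over `bits`-bit integers (bits must be even).
    half = bits // 2
    mask = (1 << half) - 1
    L, R = x >> half, x & mask
    order = range(rounds - 1, -1, -1) if inverse else range(rounds)
    for r in order:
        if inverse:
            L, R = R ^ _round_fn(L, key, r, half), L
        else:
            L, R = R, L ^ _round_fn(R, key, r, half)
    return (L << half) | R

def permute(x, key, n, inverse=False):
    # Invertible pseudorandom permutation on 0..n-1 via cycle walking: keep
    # applying the power-of-two Feistel permutation until we land in range.
    bits = max(2, (n - 1).bit_length())
    if bits % 2:
        bits += 1
    y = x
    while True:
        y = feistel(y, key, bits, inverse=inverse)
        if y < n:
            return y
```

For nephew n, the aunt is `permute(n, seed, N)`; going the other way, the nephew is `permute(a, seed, N, inverse=True)`. No relationship table is ever stored.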
I think you could extend this primitive to nonuniform graphs (aunt more likely in the same city) and more kinds of relationships. You can also use rejection sampling to condition the relationships further and probably to deal with the top down population assignment being approximate.
All that said, I think procedural generation currently runs out of creativity long before you would run out of memory to store explicit relationship indices. I think it's a lot more hopeful to get more creativity out of generative AI while keeping things grounded by explicitly representing the world, even if it "doesn't scale" to creating trillions of galaxies.
Thanks for the suggestions! I did indeed try to employ a couple of "odd" cryptographic primitives when I played around with this.
Yes, it's probably a good idea to keep track of in- and outflows in higher nodes. The higher node could have a parameter saying something about the number of aunt-nephew relationships (or any kind of relationship) that cross continent borders. Those parameters, too, could get generated on demand.
Popular games switching from pub servers to matchmade 5v5/6v6 bums me out. It feels like it was just done to chase esports money, and it's such a worse casual experience. Way more stress, way less community.
I think this shift is a direct contributor to increased toxicity in multiplayer gaming, honestly. The server communities I remember were like your favorite pub where all the regulars knew each other and you all just liked hanging out. Occasionally new people would drop in, and sometimes they'd become part of things, but usually they at least behaved, and if they didn't they were booted off and IP banned.
Man, the negative sentiment on this site towards C++ is pretty alien to me.
C++ is (I'm guessing) currently holding up an order of magnitude more applications than whatever you think is better than it. Clearly it has upsides, so it's baffling to me when people greet attempts to reduce the downsides with either "This doesn't make sense, just scrap it for something else" or endless nitpicks about the approach chosen.
A smart guy has decided to put his time towards improving a hugely influential language. That seems like an uncontroversial positive to me, and I welcome any useful results.
I think not many people ever got properly introduced to C++.
Some probably only saw it during their formal education, where it was most likely taught as C with classes and/or they had to use it for stupid things.
There are also many forms of C++: your "old and regular" C++, game dev C++, modern C++, and "I only know C, but sure I can write C++" C++.
Pulling dependencies is not as simple as `npm i boost`.
Communities are small, segmented and not welcoming to newbies.
Absolute madness with build tools. I had never worked with CMake myself until last year, and I have never been so frustrated.
As for how many applications use C++ — that's not really an argument; C++ was literally the only choice for many of them at the time.
Not really an argument? The C/C++ ecosystem is so well established that even today you don't have a choice in a lot of contexts.
If that doesn't show the power of a language, I don't know what can. I don't really think the community is toxic or unwelcoming, more like people are too used to ctrl+c ctrl+v'ing their way out of problems. Nothing against that, I'm the same way, but it really is a matter of taking your time with the language. I've honestly learned to applaud most rust fanatics because of this; at least they are passionate about learning the language inside and out.
Where is C++ the only choice? Gamedev seems like the closest—as far as I know, most AAA games aren’t using C, but I think maybe many are using C# via Unity (I’m not familiar with the gamedev space)? In the embedded world, C still reigns supreme. Maybe in the world of high frequency trading? But these seem to be fewer and fewer and Rust seems increasingly likely to be a contender in these domains.
Game engines are one example, there isn't a robust-enough language other than C++ that has the ability to build some powerful abstractions while also giving you the tools to manage hardware more directly. Note that Unity uses C# as a scripting language, but under the hood the core engine is all in C++.
I agree that these things all use C++, but the parent’s claim was that there are areas (industries?) where C++ has no alternatives. Drivers and high performance systems can all make use of C, Rust, Ada, etc as well as C++ depending on requirements.
I'm sorry, but Rust still has a ways to go before established industries believe in it enough to drop C/C++.
I use C/C++ interchangeably, since extending an existing C codebase won't be a problem.
Most aren't willing to go as far as Ada either, which is why even in military endeavors, C/C++ is used as a more familiar "face".
I think it's pretty easy to forget the foothold these languages have, and their flexibility. We all have seen the perseverance of COBOL.
My original comment specifically said that older languages have staying power, and that Rust will have to mature a bit to become the language of choice for new projects in these niches. But even still, I wasn’t talking about dropping C++ for Rust—I only mentioned Rust as an alternate option.
No problem, I just don't consider it as an alternative for the aforementioned concerns. Nonetheless, I wasn't trying to point fingers at Rust, more of a pragmatic approach to modernizing a code infrastructure.
I echo the same sentiment here, but mainly for the C language; if my memory serves me right, I didn't have any encounter with C++.
My experience was negative overall and traumatic to some extent and I wouldn't rule out that some who were first exposed to the language in college or even HS share the same aversion and feel scarred and haunted forever by that Blue Ghost.
My theory is that the Internet generally hates multi-paradigm languages (Scala!). Key phrase there is “the Internet,” which is a euphemism for “people who write in the comment section.” They aren’t a representative sample, just a vocal one.
There are plenty of good reasons to hate on C++, but that’s conflated with a lot of “real hacker” signaling.
Most takes on C++ lament that it isn’t C, or is too complex. Rust is a quality replacement for C++, but it doesn’t succeed on either of those counts either.
All new code I write is in Rust or Haskell. If I need to do something weird, like with DynamoRIO, C++ it is.
Python and Ruby were considered multiparadigm languages once, and the internet didn't hate them at all.
I think "the internet" tends to complain about languages which have accrued complexity over time, and contain parts that are now considered bad but still lurk in the shadows tripping you up.
Scala, C++, perl and others seem to fall into this category.
I think the Internet doesn’t like languages that lack good, strong opinions that will help you find a successful path. C++ just says “here’s every feature we’ve ever heard of; good luck!” whereas Go has strong opinions about how you should build software, and while they might not suit everyone, they really do put you on the right track. Rust is closer to Go, but it has so many features that it still takes some time to figure out the happy path—fortunately, the Rust community makes up for a lot of that with excellent documentation (e.g., the Rust Book) and helpful, friendly forums.
There's a very real desire for things to just be laid out for you, and much less regard for whether the workflow/tradeoffs associated with that beget quality software in the process.
It is hard to not see this as an army of advanced beginners unconsciously trying to avoid doing the work required to level up.
> It is hard to not see this as an army of advanced beginners unconsciously trying to avoid doing the work required to level up.
I don't think it's hard at all. "I don't want to have to manually manage dependencies" isn't "avoiding leveling up", it's just not doing pointless work that computers are good at doing. Similarly, there's negative value in requiring every/most project to script its own bespoke build system (CMake) when 99% of projects can fit a mold (and then there are efficiencies when 99% of those projects fit that mold--e.g., trivial cross compilation). None of this stuff is meaningfully related to "leveling up". Similarly, bombarding people with language primitives that are almost always footguns (e.g., inheritance) isn't really helping anyone "level up" except to know not to use those features.
In the C++ world, you have an army of people who think they're experts because they've navigated all of these problems to find something that sort of works, but in practice they're far less productive and often don't know what they're missing out on from the rest of the industry. I would rather have people who are productive but don't pretend they're experts.
My problem with C++ is that, with all its complexity, it's still unsafe. Efforts to add safety to it as a library (unique_ptr, etc) are commendable and useful, but they cannot be comprehensive, because the language's design, especially the early decisions, resists them.
Lack of modularity and the fact that everything is treated like a single source file (via #include) adds interesting ways of unexpected interactions.
“Holding up an order of magnitude more applications … clearly it has upsides”.
It’s holding up lots of applications because better languages didn’t exist at the time those projects were started, and it’s rarely feasible to switch languages. Specifically, a lot of people look to Rust to unseat C++ for new applications, but it will take a while for Rust to mature with respect to libraries (e.g., game engines) and mindshare in those industries. But even then, old languages have tons of staying power by virtue of age.
I'm not super familiar with Rust; does it have SSE/AVX intrinsics (others?)? Can you write assembly embedded in Rust code? How does Rust stack up in terms of performance?
Rust's std::simd is the portable abstraction, but so far it's only available in nightly Rust. In principle it lets you write SIMD code for whatever hardware (ARM, x86-64, whatever) is targeted, including AVX.
Yes, Rust has inline assembly in roughly the same way you'd be used to from C or C++.
The Benchmarks Game has a bunch of benchmarks, and while it's probably significant that the Python programs are routinely orders of magnitude slower than say C, we likely shouldn't read too much into whether somebody scraped a few more milliseconds off the best time for one of the benchmarks listed.
Portable abstractions for SIMD aren't very useful, because if you're writing SIMD you want performance, and the things it abstracts over (specific SIMD capabilities and weird performance quirks of different instructions) mean the results of using it aren't predictable.
It doesn't feel much different from any other language that people have been burned by (e.g. Javascript). I have some bad C++ experiences, and I know enough programmers I respect who stick to C over C++.
Of course it's difficult to equate that kind of advice/feedback to negative comments on HN. But often, there's some kernel of truth there. So while there's some merit to C++, the criticism can be equally valid. And keeping the underlying complexity of C++ (as Carbon will) might not meaningfully simplify development.
I used to work for a company whose product was a fairly large satellite control center software system (and we could supply hardware if needed), written in C++. It's used for a lot of commercial fleets. For example, back in the 2000s, when I worked on it, it was used for CDRADIO/Sirius's fleet. (I don't know if it's still used for SiriusXM's Sirius satellites, if any.)
I liked C++ in some ways, but as a whole, I think, C++ didn't reduce -- and may have increased -- the complexity of our software compared to the complexity of an equivalent C implementation. (I'm talking about the complexity of the software itself, not the complexities of the tasks it was doing.) The distribution of the complexity in the code would just have been different between the two implementations. IMHO.
So I mostly stick with C or other non-C++ languages now. Of course, C++ has expanded and changed greatly since then.
I remember really enjoying C++ circa 2012, but I really hated scripting my own build tooling via CMake or manually wrangling dependencies. This was the stuff that pushed me out of C++. Fortunately, Go had just hit 1.0 and it fixed many of the problems out of the gate, and I didn’t really need the performance that C++ offered.
I don't think that Redis qualifies as a large software system by modern standards. In fact, it's popular because it's simple, small and understandable compared to its competitors, which comprise many millions of lines of code.
Redis is ~200K lines of C code, so it's relevant project-size-wise, but - redis was started in C, right? It's not like developers now have the option to "go C++" without a company-wide decision.
Still, if you could quote Redis developers making the GP's claim, that would count.
Some software for storage products is C. I imagine other conservative industries could be similar.
But I have also heard that although the senior devs would prefer to stick to C for technical reasons, they have started to use/embrace C++ for hiring reasons (since many grads are taught C++). So I wouldn't be surprised if finding "pure" C codebases/ones without C++ is getting harder. The ecosystem around C++ (e.g. test frameworks) certainly seemed healthier. It's been a few years though, so I might not have a good picture any more.
Off the top of my head, for large open source C codebases that are not system/kernel/controller code, I guess Python, Ruby and Redis would be good examples.
I do myself fall in the category of people sticking to C over C++. I used to love C++ up until C++03, then completely hated the whole "new generation C++" orientation that started with C++11 and pretty much decided to not touch C++ anymore.
I think that "large" really needs to be defined here. CPython is 350kloc of C. I have seen C++ codebases with more source files than there are lines of code in that. Just the "base" module of Qt is ~4million loc
I came up with the following guess many years ago:
There are people who are totally unproductive at C++ and find it scary, mainly because they are not very exposed to it. They assume everybody is as uncomfortable and unproductive with it as they are. Nay, it is impossible for anyone to be productive with it, simply because they aren't. They will attack evidence that somebody has done well with it, because it is some defense of ego for them.
There are other complaints about C++ beyond this of course, with validity, and many of them from old timers and people who bitterly complained about it for decades from a place of knowledge and experience. People have been complaining about ugliness of C++ for longer than my own career. However, as more and more people come up in the post-C++-as-fashionable era, I think this above theory is more and more the bulk of the complaints.
C++ is a great language, the sub culture that keeps unsafe C patterns alive in the language, not so much.
Example: back in the heyday of C++ frameworks bundled with compilers, bounds-checked strings and collection arrays were the default.
Nowadays you either call at(), have your own bounds-checked types, or enable the debug library, but naturally without influence on 3rd party dependencies.
The text-include compilation model is broken, larger projects pay dearly for this, and thanks to C++'s very long history & very long feature list, there is a veritable Babel of feature subsets & compiler opts to choose from. Adding new dialects and features is not an improvement here.
This is what the "modules" feature in C++20 addresses. People are complaining about modules a lot, I think because the spec is a little complicated... but the spec is a little complicated because the problem is a little complicated, and you can't pretend that complexity doesn't exist when you're writing the spec. (There are some other complaints about the modules system. It wasn't going to please everyone.)
New dialects and features were getting added to JS all the time, but what happens is people writing JS libraries or tooling would watch how far these features spread in their users' browser install base, and many of these features would never even make it into popular JS runtimes, while others are everywhere now.
I think it's a reasonable model for development--lots of people trying to improve things, the community slowly sifts it out, and the standards are the most conservative of all.
Is there something I'm missing? People are complaining because modules aren't supported yet? Isn't it reasonable to address this complaint by adding module support to compilers, and isn't this what's already happening?
As I understand it, modules support goes beyond just compilers — yes you need support there, but also in libraries (the std library is still not available as a module, though apparently it's in progress) as well as build tools (CMake, Bazel, etc.).
People are complaining because it’s 2022 and support for modules is seemingly not there or incomplete in all these places, and modules are talked about in some C++ communities as if they’re a thing that is actually usable (for example Bjarne’s talk at Cppcon a few days ago).
> People are complaining because it’s 2022 and support for modules is seemingly not there or incomplete in all these places...
So they're complaining on the internet about not getting free stuff fast enough.
They could work with the maintainers of their toolchains instead. If everyone that used xcode complained to apple, C++ modules would be done there already. MS likewise would probably put more resources on it. Open source is a little more complicated, but Red Hat and Canonical do have paid products.
> So they're complaining on the internet about not getting free stuff fast enough.
This is a very weird take.
Visual Studio isn't free for professional use. Xcode is free, but publishing for macOS/iOS is not. Both tools exist to serve platforms owned by the first and third largest companies in the world by market cap. Microsoft and Apple don't spend tens of millions of dollars a year on employee salaries for these tools out of the goodness of their hearts.
In any case, the issue isn't with the toolchains. The issue is the C++ committee created a specification that has turned out to be problematic. Your next response may be to join the committee and fix it from the inside! Google tried that and rage quit to create Carbon. Sutter hasn't quit, but I think Carbon and Cppfront getting announced the same summer is not an accident.
Very Online engineers complain about slow progress on free stuff for the open source toolchains. And they don't bother to light up ticketing systems on paid products. Microsoft and Apple don't spend as much on these things as you would expect, because the relevant management thinks we don't really care that much. But the common thread is that sitting back and expecting the world to come to you isn't reasonable.
Incidentally, Google is a big place. Some Googlers are still involved in the ISO committees. Certain Googlers, admittedly influential ones, lost patience and started betting their reputations that certain dramatic moves would be a better choice.
I personally don't think ISO is the presidency of C++. C++ culture focuses on language design way too much and engineering (good third party libraries, supporting implementation, empirical evidence, etc.) not nearly enough.
Patching over the old work--hopefully without breaking anything--always is. I'm just stating that much of the hostility toward C++ comes from the fact that we have so many superior options to choose from now--options which profited from the lessons C++ learned the hard way, and incorporated them into v1 instead of patching them in as options at v23.
And this is not to strong-arm anyone with a "Rewrite it in Rust!", but to suggest that C++ is maybe not the best choice for new work in 2022.
> This is what the "modules" feature in C++20 addresses.
This is the perennial "If you only used modern C++, you'd be fine."
Sorry. People have been repeating this for 15 years, and it hasn't gotten any more true.
For example, C++ still doesn't have a useful string type--everybody rolls their own. How do you interoperate when 2 different C++ libraries can't even agree on something as basic as what a "string" is? The existence of "header-only" libraries and the contortions they go through is a tacit admission that compiling and linking C++ is still a disaster. C++ still doesn't have a stable, documented ABI--so everybody defaults to C (thus throwing away any theoretical usefulness C++ might have). Embedded shuts down 90+% of C++ beyond "C with classes" because they can't use exceptions and can't allocate memory dynamically. The preprocessor is a brain parasite that has grown with C/C++ and can't be removed anymore without killing the host. etc.
In fact, I would argue the lack of stable ABI is the only thing propping C++ up. Because the C++ ABI is so shitty, you have to make the top-level code C++ which then calls into the other libraries and subsystems which actually have useful, stable and documented ABIs and interfaces.
If C++ had an ABI that didn't suck, you could invert that, drive the C++ from the better language, and everybody would relegate C++ to increasingly tinier libraries until they could excise the remaining code and throw it into the trash.
I feel for the people who put in their entire lifetime trying to "improve" C++. However, it's time to admit that C++ can't be fixed and move on.
How amazing would it be to have people like Stroustrup and Sutter being paid to work on a language that doesn't start with unfixable suckage?
To be clear, I’m in the “C++ sucks” camp. But I don’t agree that C++ “can’t be fixed”. I’d rather say that some problems with C++ can’t be fixed, but others can be.
People will still use it, and that is entirely sensible and rational, and it makes sense to improve it. Long-term, maybe C++ will fade into the same kind of sunset that Fortran and Cobol currently occupy.
I’m not gonna make fun of Fortran users, or tell them to move on. There’s even a Fortran 2018.
Trying to allocate people to only the “best” possible work—say, replacing C++—is just poor allocation of resources.
> C++ is (I'm guessing) currently holding up an order of magnitude more applications than whatever you think is better than it.
So what. No one's going to rewrite all of those million lines of code in New C++, or whatever they call this incompatible syntax. It's just a distraction from more relevant efforts.
New code can be written in the new syntax, with full access to existing libraries.
I could easily see a company like Meta adopting this. They have both a huge amount of C++ code as well as actively developed guidelines and internal libraries that make use of cutting-edge features.
"I'm sharing this work because I hope to start a conversation about what could be possible within C++’s own evolution to rejuvenate C++, now that we have C++20 and soon C++23 to build upon."
Clearly this is relevant for c++ itself (coming from Sutter), so I'd say it's quite unfair/misguided to call it "just a distraction from more relevant efforts."
Of course they will. What they are less likely to do is to rewrite it in a completely different language (e.g. Rust.) C++ became popular in the first place because it could easily go along with C. No need to rewrite in bulk.
The thing is that C is still the lingua franca abi, so there's still no need to rewrite in bulk, you can still just use the same C libraries in another language, and rust is particularly well suited to doing that in roughly the same ways C++ is.
What's getting harder and harder to see now is why, if you need to write new or rewrite now, you'd choose C++ over rust. In the long run that's a recipe for only the most gnarly old codebases being written in c++ and no one wanting to touch them.
> The thing is that C is still the lingua franca abi, so there's still no need to rewrite in bulk, you can still just use the same C libraries in another language
No. C is the lingua franca for exported public stable ABIs, which is an extremely small subset of any given program's ABI usages.
C++ ABI is just as widely used as C's, just for internal unstable linkage instead of stable exported linkage. So yes you still need to rewrite in bulk to move off of C++, unless your code happened to be tiny & only used C API libraries.
We're starting new projects today, in 2022. (We do Signal Processing code for the Position, Navigation, and Time industry.)
It's all C and C++, and Python if we need a scripting front-end. (Prototypes are done in matlab). "Rust" doesn't even exist in our universe. I don't think the people on HN are really in touch with how real people program in the real world.
And on the few occasions when we have to do a complex desktop GUI app, we'll use C# or F#. We can get cross-platform Windows/Linux easily this way.
Rust is still fringe even at FAANG level companies. I work at one such and we have a service or two written in Rust. I like Rust as a C++ replacement in the right context, but for 99.999% of applications it’s not better in any qualitative or quantitative way than C++. Or Java. Or Go.
> Rust is still fringe even at FAANG level companies.
Yes, everyone knows that, except for HN users! While I'm sure there are managers at large companies who let some employees play with Rust, it's not used.
I mean, I'm a HN user who just left one FAANG for another and I'm pretty confident this is changing a lot faster than you think.
The thing that obscures this, I think, is that at most of them, the surface area where C, C++, and Rust compete--high-availability, security-critical software--makes up a relatively small portion of what they do, no matter what language it's in.
So while there's a lot of C++ at, say, Google and Facebook (but relatively little at Apple IME), very little of it needs to be in C++, let alone Rust.
But where it matters? You better believe big companies are shifting towards "if you're starting new you should seriously consider Rust" (if not a mandate). And once you let one other language into your mix, the question becomes: why is all the high-level stuff written in C++? May as well start new projects in Go.
Some are farther along than others but it's a thing.
There is a lot of C++ at Apple. It's of course possible to work there and never touch it, but it's also possible to work there and do almost all your work in C++. Many significant parts of iOS and macOS are written in C++, for example WebKit, the implementations of ObjC and Swift, clang/llvm, dyld, CoreAudio, CoreAnimation, WindowServer, Dock, Finder, etc.
I enjoy this[1] annual series of blog posts on Apple's usage of Swift in iOS. I just did a quick and dirty, but similar, analysis of Apple's usage of C++ in the macOS 12.5.1 install on my computer. I extracted my dyld_shared_cache, then used find and nm to count up the binaries containing unstripped C++ symbols. This undercounts the usage of C++, since sometimes it's used only internally in stripped binaries. I also think the number-of-binaries metric probably undersells the importance of C++, because the binaries that do use it tend to be the more significant ones. Still, it gives some idea of the scope of C++'s use at Apple.
About 25% (559 / 2292) of the libraries in the dyld_shared_cache contain unstripped C++ symbols. About 15% (22 / 154) of executables in /System/Applications (so 1st party apps / helpers that ship with the system) do.
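For what it's worth, the counting step can be sketched roughly like this (the helper names and sample nm output below are mine, not from the actual analysis). On macOS, Itanium-mangled C++ names carry an extra leading underscore, so they start with `__Z` rather than the `_Z` you'd see on Linux:

```python
# Rough sketch of the counting logic: given nm-style symbol listings
# per binary, count how many binaries contain at least one
# C++-mangled symbol name.

def has_cxx_symbols(nm_output: str) -> bool:
    """Return True if any symbol name in an nm listing looks C++-mangled."""
    for line in nm_output.splitlines():
        parts = line.split()
        if not parts:
            continue
        name = parts[-1]  # symbol name is the last column of nm output
        # "__Z" on macOS (extra leading underscore), "_Z" elsewhere
        if name.startswith("__Z") or name.startswith("_Z"):
            return True
    return False

def count_cxx_binaries(listings: dict[str, str]) -> int:
    """Count binaries (keyed by name) whose nm listing shows C++ symbols."""
    return sum(1 for out in listings.values() if has_cxx_symbols(out))

# Synthetic example; real use would run `nm <binary>` per file:
listings = {
    "libfoo.dylib": "0000000000001000 T __ZN3Foo3barEv\n",
    "libc_only.dylib": "0000000000001000 T _do_stuff\n",
}
print(count_cxx_binaries(listings))  # prints 1
```

In practice you'd also want to skip symbols pulled in from libc++ itself, but the idea is the same.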
That said, I think you're probably right that things are changing. Probably there are lots of people at Apple thinking of how they can replace C++ with Swift in their code. But on the other hand, I would not be surprised if we can still find significant uses of C++ in whatever macOS release we have 10 years from now.
To clarify: Relative to Google and Facebook, which have built up immense c++ codebases that span the entire companies.
I spent 8 years at Apple, mostly working in C and then Rust, both even smaller pieces of the Apple language pie.
But I also did a little C++. I have commits in Swift, for example (though I wasn't on the team and they were pre-OSS, so you'd have to dig real deep to find them now).
Most of the services side of the company, which is where I was, was Java, in my experience. There's been a lot of shift to Go in the last few years though. And that's a very quickly growing part of the company where you can't do a local check to find out language use. It's also the only part where Rust is viable, because the product side of the company has much stricter limitations on what they'll build for distribution, and as of when I left, adding Rust to that mix was not even on the radar there.
But the places where Rust was gaining were small but important, which is the main thrust of what I was saying.
But anyways, with how siloed apple is experiences can differ a lot, even beyond normal for bigcorps. :/
The number of teams at these companies where “you should seriously consider Rust” is a thing is approximately (not exactly) zero. Its adoption is still a novelty, and most of the impetus behind it is engineers looking to scratch an itch without any legitimate analysis of the benefits or trade-offs involved.
It may be changing, but certainly not faster than I’ve observed (it’s not a matter of speculation, for me).
> I don't think the people on HN are really in touch with how real people program in the real world.
I mean, I’m a real programmer in the real world. I think most people on here are. Rust is already in heavy use at Microsoft, Amazon, and Google, it’s not some fringe thing anymore.
I have worked at two of the companies you’ve mentioned, and I can confirm Rust is not heavily used in these companies. Maybe a few teams use it, but it’s not really supported, and you won’t find many internal libraries and tooling support for it.
They’re using it to write useful, novel production systems software though. Fuchsia is in Google Nest Hub, AWS Firecracker is used in ECS Fargate and fly.io. I would say those represent major commitments to the language, and they’re both also novel as systems projects too.
How do you write GUI apps in C# with Windows/Linux support? Usually people don't reach for C# when doing GUI on anything other than Windows. CLI/server software is a first-class citizen on Linux these days, though.
> C++ runs a real risk of surviving only in the embedded/realtime space in the next 10 years.
Embedded/realtime? No way. The people I know in embedded won't touch C++ with a 10-foot pole.
Most of my embedded code is wrangling various "stacks" into cooperation with one another, and C++ helps me not one whit with that. At the other side, when I'm just poking sensors, C is more than enough.
And, as much I would really like embedded communication stacks to be in Rust, all that would happen would be the vendors slapping "unsafe" on everything so they would basically be writing C anyway.
This isn't a language declaration, it's a tool declaration. A tool useful for prototyping new language features and compiling them into large existing source bases.
> Clearly it has upsides, so it's baffling to me when people greet attempts to reduce the downsides with either "This doesn't make sense, just scrap it for something else" or endless nitpicks about the approach chosen.
People have been working to reduce the downsides of C++ for decades at this point, and the results are incredibly lackluster, and frequently only work in trivial toy examples - if even then. Every now and then I try out the latest and greatest I can find in terms of dangling pointer prevention, and find myself confused as to if I've even enabled the proper checks, when even my trivial toy examples fail to trigger said checks, only to be horrified when I in fact have enabled said checks and gained nothing.
I, too, welcome any useful results. The problem is: I don't actually find the results useful. I get told garbage like "C++ is safe now!" by people who apparently don't know any better, for things underperforming existing static analysis tools from over a decade ago. It's a distraction, a false promise, and a waste of my time - which could be better spent reviewing C++ code, improving existing static analysis coverage, or going full ham and unironically pushing for incremental rewrites in Rust. Not because it is perfect: I've written and debugged my share of dangling pointer bugs in unsafe Rust code too - but because it is at least better, and provides me with tools to fight the good fight.
And if, from time to time, you find my voice amongst the nitpickers of attempts at C++ "improvements", it's perhaps because the attempt has failed, and I would rather not have others waste their time like I have on false promises. Granted, such warnings are unlikely to work: the allure of something that might ward off yet more late nights chasing yet more heisenbugs in yet another C++ codebase, caused by coworkers, third parties, those long gone, or perhaps worse, by me, is just too strong. But it might.
Show me real bugs, in real codebases, caught in bulk where existing tools have failed, and you will have my undivided attention.
I will probably get some flak for this, but it's an effect of the Python generation. When you can pull in some libraries and build some bloated, slow-as-hell code in minutes, no one cares what's under the hood. The trade-off from performance (easily thousands of times; I've seen it swapping out simple Python libs for C++, specifically changing dicts to linked lists) to the ease of slapping things together makes me sad daily.
Why would a trade-off make you sad? For most code that gets written, being able to quickly make something work is far more important than a 1000x increase in performance. Some projects do need performance, and for those no one should choose Python. For all the others, I'm glad Python exists.
Because there's lots of projects out there with absolutely horrible optimization. Doing stupid shit inside of loops etc. It turns into the blind leading the blind and churns out bad code.
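To make the "stupid shit inside of loops" point concrete, here's a hypothetical sketch (made-up data, not from any project mentioned in this thread): testing membership against a list inside a loop makes the whole pass quadratic, while building a set once up front makes each lookup O(1) on average.

```python
import timeit

items = list(range(5_000))
allowed_list = list(range(0, 5_000, 7))

def slow_filter(items, allowed_list):
    # list membership is a linear scan, re-run for every single item
    return [x for x in items if x in allowed_list]

def fast_filter(items, allowed_list):
    allowed = set(allowed_list)  # hoisted out of the loop: built once
    return [x for x in items if x in allowed]

# Same result, very different cost profile:
assert slow_filter(items, allowed_list) == fast_filter(items, allowed_list)
print(timeit.timeit(lambda: slow_filter(items, allowed_list), number=3))
print(timeit.timeit(lambda: fast_filter(items, allowed_list), number=3))
```

Nothing exotic, just the kind of fix that never happens when nobody on the team thinks about what's under the hood.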
That's true, but at the same time, modern C++ bears about the same resemblance to the C++ that many (if not most) of us learned years ago, as modern English bears to Middle English.
People have been trying to make the language safer and more palatable for multiple decades now, so it seems reasonable to allow Sutter to have a go at it.
C++ is like the English language. Extraordinarily popular, hard to learn, and full of weird cruft. You can write amazing things in it as well as terribly unsafe garbage.
In the other direction, the "all fields at default value serializes as size 0 message that you can't send" behavior has been an unending annoyance since switching to proto3.
That's a really good idea. I've been thinking of what I could do as the first little reference project after I split up the engine and project code. I think a small world with text chat and games like tic tac toe would be perfect.
All credit to https://kenney.nl/ for the art. These free isometric assets have been invaluable during development, and they look great!
Good point about the sprite editor. I've been struggling with finding interesting things to show in picture form since most of the work so far isn't graphical. The sprite editor and in-world build mode are things that I should show, though.