
"safety" is really overrated. The paranoia-fueled security industry has turned programming into some sort of weird authoritarian dystopia.


Is this a joke? Do you really think managed pointers are an example of authoritarian dystopianism?


He is talking (I believe) about the general trend of nudging, cajoling more and more coders into using managed, very high level, safe languages and runtimes, and in general discouraging peeking under the hood, at the hardware level, as something raw, wild or unsafe. Yes you can still do it on a RPi, but perhaps in another decade or so, you might not be allowed to program in 'unsafe' languages on all other mainstream platforms, unless you register for a driver/system developer license or something, or not even that.

The tinkerer/hacker ethos is disappearing slowly from PCs. It never caught on in the mobile world. It may perhaps only survive as a remnant in specialised chips and boards designed for learning.


> the general trend of nudging, cajoling more and more coders into using managed, very high level, safe languages and runtimes

This is a good thing: these languages and runtimes are indeed much safer, and also much more productive than C. You can even still get the same amount of low-level control with Rust.

> in general discouraging peeking under the hood, at the hardware level, as something raw, wild or unsafe.

C is unsafe though. Decades of experience writing software has shown us that even expert C programmers write memory bugs with regularity.

> Yes you can still do it on a RPi

Or any other PC really.

> but perhaps in another decade or so, you might not be allowed to program in 'unsafe' languages on all other mainstream platforms, unless you register for a driver/system developer license or something, or not even that.

Lunacy. What is the evidence for this?

> The tinkerer/hacker ethos is disappearing slowly from PCs.

It was only ever there in the first place with a tiny minority of users, and that minority seems as committed to their craft as they've ever been.


> Lunacy. What is the evidence for this?

Look at all the locked-down walled-garden platforms proliferating, and this famously prescient story: https://www.gnu.org/philosophy/right-to-read.en.html

20 years ago, many people thought RMS was a completely insane lunatic. Yet now he seems more like a prophet.

It's not hard to see where things are going if you read between the lines. Increasingly, "safety and security" is being used to exert control over the population and destroy freedom. Letting your children play outside unsupervised is "unsafe". Non-self-driving cars are "unsafe". Eating certain food is "unsafe". Having a rooted mobile device is "unsafe". Not using an approved browser by a company that starts with G is "unsafe". ... Programming in C is "unsafe".

"Freedom is not worth having if it does not include the freedom to make mistakes."


> Look at all the locked-down walled-garden platforms proliferating

I don’t think I’m getting the connection here: Rust was incubated at Mozilla and is now managed by its own open-source foundation. There’s nothing particularly closed or “walled garden” about it.

By contrast, Apple’s ecosystem is the canonical example of a walled garden. But it’s overwhelmingly programmed in unsafe languages (C, C++, and Objective-C). So what gives?

> It's not hard to see where things are going if you read between the lines. Increasingly, "safety and security" is being used to exert control over the population and destroy freedom

This is an eye-poppingly confusing confabulation: in what world am I any less free because the programs I write and use have fewer trivial vulnerabilities in them? What freedom, exactly, have I lost by choosing to crash less?

You bring up the GNU project; their background is explicitly rooted in Lisp: one of the very first safe, managed languages. The unsafety and comparative messiness of C is one of their standard bugbears. That hasn’t stopped their message of political and software freedom, as you’ve pointed out.


Actually the GNU project is one of the culprits for C spreading into a world that was already moving to C++ and other safer languages.

> When you want to use a language that gets compiled and runs at high speed, the best language to use is C. C++ is ok too, but please don’t make heavy use of templates. So is Java, if you compile it.

https://www.gnu.org/prep/standards/html_node/Source-Language...

20 years ago, it was more like

> When you want to use a language that gets compiled and runs at high speed, the best language to use is C. Using another language is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. For example, if you write your program in C++, people will have to install the GNU C++ compiler in order to compile your program.

> C has one other advantage over C++ and other compiled languages: more people know C, so more people will find it easy to read and modify the program if it is written in C.

http://gnu.ist.utl.pt/prep/standards/html_node/Source-Langua...

Thank GNU for C.


> Actually the GNU project is one of the culprits for C spreading into a world that was already moving to C++ and other safer languages.

Both of these things can be true! GNU has advocated for C for some pretty asinine reasons. At the same time, they’ve ported all kinds of Lisp idiosyncrasies into their style guide.


With the possible exception of Rust, safety always had performance implications. Forcing people to write their program in C# or Swift or Java causes many programs to be slower than they really need to be, forcing us to either wait on them, or buy a faster palmtop.

(Now most devs don't care about performance, so they don't see that as a problem. As a user however I can tell you, I hate when my phone lags for seemingly no good reason.)


> With the possible exception of Rust, safety always had performance implications.

This is a common piece of received wisdom, but I don't think it's held up well over the last decade: both Java and C# have extremely well-optimized runtimes that perform admirably after an initial startup period, and (I believe) Swift compiles to native code with many of the same optimization advantages that Rust has (e.g., simpler alias analysis).

At the same time, C++ has seen a proliferation of patterns that are slightly safer, but perform miserably at scale: smart pointers (unnecessary lock contention), lambdas (code bloat, more I$ pressure), templates (I$), &c. C avoids most of these, but C written by "clever" programmers has similar pitfalls.
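For what it's worth, the reference-counting cost shows up in safe languages too. A rough Rust sketch (purely illustrative) of the distinction C++'s shared_ptr doesn't let you make, between a plain refcount and an atomic one:

    use std::rc::Rc;
    use std::sync::Arc;

    fn main() {
        // Rc: plain (non-atomic) reference count, usable from one thread only.
        let local = Rc::new(vec![1, 2, 3]);
        let local2 = Rc::clone(&local); // cheap integer increment

        // Arc: atomic reference count so clones may cross threads; every
        // clone/drop is an atomic read-modify-write, which is where the
        // contention shows up at scale.
        let shared = Arc::new(vec![1, 2, 3]);
        let shared2 = Arc::clone(&shared);

        println!("{} {} {} {}", local.len(), local2.len(), shared.len(), shared2.len());
    }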


It should be tested, but I don't think that a JIT compiler can beat an ahead of time compiler when the memory isn't the bottleneck.

Sure, if what you're competing against is some kind of pointer fest, forget about locks, just jumping around the memory will trash your cache, and it won't matter how optimised your local processing is. But if you follow some data oriented principles and take good care of your memory access patterns, I have a hard time imagining Java or C# beating C++ or Rust.
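To make the "memory access patterns" point concrete, here's a rough sketch (field names are made up) of the struct-of-arrays layout the data-oriented crowd tends to reach for:

    // Array-of-structs: every particle's fields are interleaved, so a loop that
    // only needs positions still drags velocities and masses through the cache.
    #[allow(dead_code)]
    struct ParticleAos { pos: [f32; 3], vel: [f32; 3], mass: f32 }

    // Struct-of-arrays: the fields the hot loop touches are contiguous.
    struct ParticlesSoa { pos: Vec<[f32; 3]>, vel: Vec<[f32; 3]>, mass: Vec<f32> }

    fn integrate(p: &mut ParticlesSoa, dt: f32) {
        for (pos, vel) in p.pos.iter_mut().zip(p.vel.iter()) {
            for i in 0..3 {
                pos[i] += vel[i] * dt;
            }
        }
    }

    fn main() {
        let mut p = ParticlesSoa {
            pos: vec![[0.0; 3]; 1024],
            vel: vec![[1.0, 0.0, 0.0]; 1024],
            mass: vec![1.0; 1024],
        };
        integrate(&mut p, 0.016);
        println!("{:?} ({} particles)", p.pos[0], p.mass.len());
    }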

Now there's this peculiar version/subset of C# that Mike Acton was promoting for Unity… though I'm not sure that counts.


> "Freedom is not worth having if it does not include the freedom to make mistakes."

That's only true to a point. Many mistakes are costly, and those costs are often borne by other people. So it's reasonable to have protection against mistakes, for the benefit of both the person who would make them and the other people that they would affect.

When it comes to computer security in particular, an easily compromised personal computer can be devastating to the livelihood of the person whose computer was compromised through no fault of their own (remember, most people don't know anything about computer security, and they shouldn't have to), and can also harm others, e.g. if the computer becomes part of a botnet. If that computer is part of an organization, then the mistake made by a programmer can affect the ability of that organization to provide important, even essential, services. This is what's driving the increased focus on safety in this context.

I realize we're drowning in cynicism these days, and it's tempting to think that it's all an evil conspiracy to take away our freedom so a few people can make more money or have more power. Such a narrative resonates with something primal in us that's reinforced by the sort of simplistic good versus evil stories that make up so much of our entertainment. Reality is messier, more nuanced, and not as neatly connected as our puny pattern-seeking brains would prefer.


20 years ago many people earned a living selling commercial compilers, and the PC was the exception to vertical integration.

Plenty of people knew what RMS was talking about.

But the GPL was very bad for business, said those who were against it, so the new world of shareware and public domain is here again.


> these languages and runtimes are indeed much safer, and also much more productive than C. You can even still get the same amount of low-level control with Rust.

Rust is not a C analog. The whole value proposition of C is simplicity, and Rust is anything but simple.

>> but perhaps in another decade or so, you might not be allowed to program in 'unsafe' languages on all other mainstream platforms, unless you register for a driver/system developer license or something, or not even that.

> Lunacy. What is the evidence for this?

Look at a platform like Apple. Every release makes it harder to run arbitrary code.

>> The tinkerer/hacker ethos is disappearing slowly from PCs.

> It was only ever there in the first place with a tiny minority of users, and that minority seems as committed to their craft as they've ever been.

What do you mean? In early PCs, the way you ran software was to copy code from a magazine and compile and run it on your workstation. Being a PC user at all meant being a tinkerer/hacker a few decades ago.


> In early PCs, the way you ran software was to copy code from a magazine and compile and run it on your workstation. Being a PC user at all meant being a tinkerer/hacker a few decades ago.

Bullshit. Except for the brief period of time when the Altair was the only thing going on in the Micro space… the Apple II, Atari 800, IBM PC and TRS-80 amongst others were marketed in the late 70s/early 80s with off the shelf, ready to run software. While copying code out of a magazine was something you could do, it wasn’t even the common case then.

> Every release makes it harder to run arbitrary code.

I have not experienced this. Yes, Mac OS makes it harder to run random stuff downloaded from the internet, but LLVM, clang, cmake, and python from the command line work the same as they always have (you are fetishizing code that you enter yourself, after all).


The new Windows does not even run on hardware which doesn't have a TPM. You really don't see signs that computers are getting more closed?


The PC was an accident caused by IBM's failure to bring Compaq into line.

All other platforms were hardly any different from Apple; in fact, Apple is just like it always has been.


Rust is arguably simpler than C++ to comprehend, and sure, more complex than simple C.

But the complexity argument is overblown.


I didn't say anything about C++.

Rust is a very complex language. You can argue about whether it's more or less complex than C++, but it's certainly on that end of the spectrum. C is way on the other end.

That's not a value judgement of Rust, just an observation.


> C is way on the other end.

Rust is complex, but I think it’s honest in its complexity: it saddles programmers with lifetime management in exchange for better optimizations (alias analysis is a pain in C!) and memory safety.

This is in contrast to C: it’s very easy to write C that compiles, but very difficult to fully evaluate its correctness. Part of that is the extraordinary complexity of the C abstract machine, combined with its leakiness: writing correct C requires you to understand both your ISA’s memory model and the set of constraints inconsistently layered on it by C. That’s a kind of pernicious complexity that Rust doesn’t have.
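A tiny sketch of that trade in practice: the commented-out line below is rejected at compile time, while the equivalent C (reading through a dangling pointer) compiles without complaint.

    fn main() {
        let r;
        {
            let s = String::from("hello");
            r = &s;
            println!("{}", r); // fine: `s` is still alive here
        } // `s` is dropped; any later use of `r` would dangle
        // println!("{}", r); // error[E0597]: `s` does not live long enough
    }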


There is something true in what you are saying, but I still think the difference in complexity between Rust and C has much more to do with the very different goals of the languages rather than C just hiding complexity from the programmer.

Rust targets a much higher level of abstraction than C. The machine itself is at arm's length, and you are mostly thinking in terms of an abstract type system and borrow checker rather than a CPU and memory system. A lot of Rust programming is declarative, and a lot of the complexity comes from finding the right way to express your intent to the compiler through the various systems of the language. The tradeoff for that complexity is that you get to write programs with very strong safety guarantees, correctness benefits, and low performance overhead.

C is about having simple, imperative control over the computer hardware. With C you are thinking in terms of the CPU and the memory system, you are mostly just telling the computer exactly what you want it to do.

C definitely does have some failings: for instance as you allude to, C doesn't ensure that all failure modes are encoded in the function signature, so it's not really possible to audit a C program for correctness by reading the source alone, the way you can almost do with Rust.
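A minimal sketch of the signature point (the function is hypothetical): in Rust the failure mode is part of the type, whereas the C counterpart returns -1/NULL and sets errno, which nothing forces the caller to check.

    use std::fs;
    use std::io;

    // The possible failure is visible in the signature; callers must handle
    // the io::Error before they can touch the String.
    fn read_config(path: &str) -> Result<String, io::Error> {
        fs::read_to_string(path)
    }

    fn main() {
        match read_config("config.toml") {
            Ok(text) => println!("read {} bytes", text.len()),
            Err(e) => eprintln!("could not read config: {}", e),
        }
    }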

But that doesn't mean that the level of complexity which comes with Rust is necessary to fix the issues with C. Zig is a good example of trying to plug some of C's holes without increasing the level of abstraction.


> Look at a platform like Apple. Every release makes it harder to run arbitrary code.

The original claim was that "you might not be allowed to program in 'unsafe' languages on all other mainstream platforms". But Apple restrictions don't distinguish between safe and unsafe languages, they just restrict all arbitrary code, so this is not an example of the point being made, but rather an orthogonal issue.


I was responding to the broader point that security is used as justification for making systems less accessible to programmers.

Apple doesn't distinguish between safe and unsafe languages for now, but it's not impossible to imagine this becoming a restriction in the future, given the broader trend.


> Rust is not a C analog. The whole value proposition of C is simplicity, and Rust is anything but simple.

I would say the value proposition is control and performance, and more pragmatically ubiquity. If the value proposition were simplicity, why aren't C programmers writing Lisp instead? If it's simplicity and control, why aren't they writing assembly? At this point, C is little more than a bad abstraction that people are nostalgic for.

> Look at a platform like Apple. Every release makes it harder to run arbitrary code.

No, it doesn't. It has not gotten harder to run arbitrary code. It has gotten harder for developers to distribute unsigned applications. I've been using Macs for 10 years and my setup process throughout that whole time has been: xcode-select --install, install Homebrew, get on with my life. The OS never interferes with my programming beyond that.


> I would say the value proposition is control and performance, and more pragmatically ubiquity. If the value proposition were simplicity, why aren't C programmers writing Lisp instead? If it's simplicity and control, why aren't they writing assembly? At this point, C is little more than a bad abstraction that people are nostalgic for.

Because it is hard to model hardware in idiomatic Lisp, and Assembly is not portable and not very productive. C is a somewhat simple, portable, productive, and fast language for writing code that is as close to the machine as you can get without having to use Assembly. It can be easily combined with Assembly when needed. Barring C++, it has the biggest tooling support of any language available.


I agree - C is a sweet-spot language. It shows its age in certain ways, but it remains a relevant language 50 years after its inception because it strikes a very pragmatic balance between being simple and easy to understand, and in being a fairly thin abstraction over the hardware.


> What is the evidence for this?

There are entire CLASSES of computing devices which you cannot put arbitrary code on without severe obstacles...


How exactly is it relevant to the topic? What does it have to do with ditching C?


> This is a good thing: these languages and runtimes are indeed much safer, and also much more productive than C. You can even still get the same amount of low-level control with Rust.

How do you bootstrap languages like Rust? Another 'safe' language? What about that one?

Someone somewhere has to be working at the asm level.


Bootstrapping and language safety are orthogonal. C is unsafe and still you can't bootstrap it if you don't already have a compiler which can compile your C compiler. According to that logic even assembly is not low level enough because you need an assembler to make a runnable program out of it.


Source code available,

http://www.projectoberon.com/


Safety above all is the path towards slavery.

This is true in both politics and software.


Safety is completely orthogonal to being a bare-metal language (see Rust). You can have a completely locked-down platform with an unsafe language (see iOS).

I'd argue that anyone who thinks language safety is some authoritarian handcuff doesn't really understand low-level programming to begin with.


I actually think safety should be guaranteed at an OS/hardware level and not at a software level. If it's guaranteed that my process can only make a mess inside its own memory allocations, let the software be as unsafe as it wants.


Then you'll be happy to learn that what you propose has been the case for consumer computers since protected mode was added to Intel 80286 processors in 1982.

I think few people in this discussion are worrying about programs directly affecting other programs through memory unsafety, exactly because this doesn't really happen for software that isn't a driver or inside the OS kernel. The problem with memory unsafety is that it often allows exploits that corrupt or steal user data, or gain unauthorized access to a system. That's not a problem when you are the only user of your software and you only have access to your own data, but once you have other people's data or run on other people's systems, I think you should at least consider the advantages of using a safe(r) language.


But I don't understand how data stealing can happen if each process is effectively sandboxed. If my process can't read or write to memory outside of what it allocated, how can I corrupt or steal user data?


Well it depends on your definition of sandboxed. Does your program have permission to perform I/O, either by reading/writing to a filesystem or sending/receiving data on a network?

Most "interesting" programs can perform I/O. Then you run into ambient authority and confused deputies.


Yeah I guess it seems like a decent model for "safe" software would be sandboxed memory, and fine-grained file permissions. Arbitrary network traffic is a bit less dangerous - I mean someone could steal CPU cycles to process data and send it over network, but a safe language is not going to save you from that either.

Most programs do not need arbitrary access to the file system, and it should be the OS's job to whitelist which files a program has access to. Again, a safe language is not going to save you from bad behavior on the filesystem either. It really only solves the memory problem.


> Most programs do not need arbitrary access to the file system, and it should be the OS's job to whitelist which files a program has access to. Again, a safe language is not going to save you from bad behavior on the filesystem either. It really only solves the memory problem.

Except that it is often a memory-safety problem that enables an attacker to make a program misbehave, through things like buffer overflows. A memory-safe program is much harder to exploit.


Are we talking in circles here? My original point was that memory safety should be ensured by the OS/hardware. That way no matter how badly my program misbehaves, it will not be able to access memory outside of the sandbox. In other words, the CPU should not be able to address memory which has not been allocated to the current process. A buffer overflow should be a panic.

Even with a safe language, there are vulnerabilities like supply chain attacks which allow malicious code to use an escape hatch to access memory outside of the process. For example, I could be programming in Rust, but one of the crates I depend on could silently add an unsafe block which does nefarious things. OS/hardware level sandboxing could prevent many such classes of exploits.
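Roughly what I mean, as a sketch: safe indexing is bounds-checked, but a single unsafe block (the kind a compromised crate could quietly add) opts out of the check entirely.

    fn main() {
        let v = vec![1, 2, 3];

        // Safe access: an out-of-bounds index panics instead of reading
        // whatever happens to sit next to the allocation.
        // let _ = v[10]; // would panic: index out of bounds

        // An `unsafe` block bypasses the bounds check; with an out-of-bounds
        // index this would be undefined behavior, not a guaranteed panic.
        let second = unsafe { *v.get_unchecked(1) };
        println!("second element: {}", second);
    }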


> That way no matter how badly my program misbehaves, it will not be able to access memory outside of the sandbox

The problem is not about memory outside of the sandbox, but inside. Please read about return-oriented programming, for example, where a buffer overflow bug in a process can be exploited to hijack said process into doing work it was not meant to do normally. If this error happened, for example, in sudo, it could very well be used to gain privileges and do real harm — and this whole problem domain is almost entirely solved by using a “safe” language.


In the case of a browser, a buffer overflow can be exploited to upload user files, for example — which are very much readable without trouble on most Linux distros.


But again, isn't that the OS failing to protect user files rather than an issue of memory unsafety?


That's another aspect of it. Please see this answer of mine:

https://news.ycombinator.com/item?id=27642630

In short, memory unsafety makes programmer bugs exploitable, instead of generally just failing.


I understand what you are saying, and I understand that this is a real security issue in modern computing. However I would put the question to you in a different way:

Let's say we have two programs, A and B.

Program A by its very nature needs to have write access to the system's file permissions in order to fulfill its core purpose.

Program B only needs R/W access to a sqlite database installed in a specific directory, and the ability to make network calls.

I would agree that for program A, a memory-safe language can provide a very real benefit, given the potential risk compromising this program could expose the system to.

Would you agree that if a buffer overflow exploit in Program B can be used to compromise the system outside of the required resources for that program, this is a failing of the OS and not the programming language?


I agree with that — not having buffer overflows is good to have but not sufficient for security. MAC and sandboxes are a necessity as well; e.g. SELinux can solve your proposed problem with programs A and B.


to be clear, you're claiming that language constructs for avoiding massively prevalent use-after-free bugs (unique_ptr) will lead us all to lose control of our devices?

Nobody's suggesting we replace all the C in the world with signed JavaScript from Google; we're literally talking about compile-time checks for pointers here.


unique_ptr is a conspiracy, man. You see, the people who make money off of mallocs (those corrupt DRAM manufacturers) want it to stay that way. Just follow the money.


Thread safety is like bicycle helmet laws, discuss..


Well, kind of. I would argue that most code does not need to be thread safe because it is not intended to run in a multi-threaded environment. I once worked on an application where it was 'standard' to run everything in a thread pool so basically everything could run simultaneously with everything else. The problem was that there was also lots of state to manage. So then one ends up giving every class one or more locks. Also, this was not the high-performance part of the application. The obvious solution is to run most of the application in a single-threaded message loop and get rid of all of the locks. This appears to be heresy nowadays though. The high profile C++-ers tell us that everything has to be thread safe.
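The shape I have in mind, sketched in Rust with made-up names: one thread owns the state and everyone else sends it messages, so no locks are needed.

    use std::sync::mpsc;
    use std::thread;

    // All mutable state lives on one thread; other threads send it messages
    // instead of sharing it behind locks.
    enum Msg {
        Increment,
        Get(mpsc::Sender<u64>),
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Msg>();

        let state_thread = thread::spawn(move || {
            let mut counter: u64 = 0; // no Mutex: only this thread touches it
            for msg in rx {
                match msg {
                    Msg::Increment => counter += 1,
                    Msg::Get(reply) => {
                        reply.send(counter).ok();
                    }
                }
            }
        });

        for _ in 0..3 {
            tx.send(Msg::Increment).unwrap();
        }
        let (reply_tx, reply_rx) = mpsc::channel();
        tx.send(Msg::Get(reply_tx)).unwrap();
        println!("counter = {}", reply_rx.recv().unwrap());

        drop(tx); // closing the channel ends the loop and lets the thread exit
        state_thread.join().unwrap();
    }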


> Well, kind of. I would argue that most code does not need to be thread safe because it is not intended to run in a multi-threaded environment.

In that case you are actually thread safe, though! Just use a language that lets you specify which data can't be sent across threads (Rust isn't the only example, Erlang enforces this entirely dynamically) and use thread locals instead of statics (in a single threaded environment they're effectively the same thing), and tada, you have thread safety that continues to work even if people decide to run your stuff on many different cores.
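Roughly, and with made-up names: thread-local state plus a type the compiler refuses to send across threads.

    use std::cell::Cell;
    use std::rc::Rc;
    use std::thread;

    thread_local! {
        // Per-thread state instead of a global `static`: no locks, and each
        // thread that touches it gets its own independent copy.
        static REQUEST_COUNT: Cell<u32> = Cell::new(0);
    }

    fn main() {
        REQUEST_COUNT.with(|c| c.set(c.get() + 1));

        // Rc is !Send, so "this data stays on one thread" is checked at compile time.
        let single_threaded = Rc::new(42);
        // thread::spawn(move || println!("{}", single_threaded));
        // ^ rejected: `Rc<i32>` cannot be sent between threads safely
        println!("{}", single_threaded);

        // A spawned thread sees its own REQUEST_COUNT, not the main thread's.
        thread::spawn(|| {
            REQUEST_COUNT.with(|c| c.set(c.get() + 10));
        })
        .join()
        .unwrap();

        REQUEST_COUNT.with(|c| println!("main thread count: {}", c.get())); // prints 1
    }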


One would, but without a barrier the head hits the ground before it's done waiting for the group to get out of the critical section :)


"I know my software just works" is really hubris. The fly-by-the-seat-of-our-pants game industry has cranked up programmers egos and made them ignore a wide range of tools and practices that have been proven time and time again to improve developer velocity and reduce defects.


"proven" sounds like cargo-culting dogma. The same was said of OOP in the 90s, and look what that caused. Hence my distrust of the snake-oil.

Also, my real-world experience with wading through the abstraction insanity often seen in C++ (and justified because it's "safer") to find and fix bugs, and even more so with the sheer baroqueness of Enterprise Java (arguably an "even safer language"), shows that "reduce defects" is more like a dream. Maybe the number is reduced but when one is found, it tends to be harder to fix.

Put another way, I'd rather fix relatively simple C (which also tends to be simpler code in general) than the monsters created by "modern C++" because they thought the "added safety" would mean they could go crazy with the complexity without adding bugs. Perhaps there is some sort of risk compensation going on.

The saying "C makes it easy to shoot yourself in the foot; C++ makes it easy to blow the whole leg off" comes to mind.


> Put another way, I'd rather fix relatively simple C (which also tends to be simpler code in general) than the monsters created by "modern C++" because they thought the "added safety" would mean they could go crazy with the complexity without adding bugs.

It's completely possible to write C++ code without it being a mess of a template monstrosity and massively overloaded function names. People who write C++ like that would write C filled with macros, void pointers and all the other footguns that C encourages you to use instead.

I've been working with the sentry-native SDK recently [0], which is a C API. It's full of macros, unclear ownership of pointers (in their callback, _you_ must manually free the random pointer, using the right free method for their type, which isn't type checked), custom functions for working with their types (sentry_free, sentry_free_envelope), opaque data types (everything is a sentry_value_t created by a custom function - to access the data you have to call the right function not just access the member, and this is a runtime check).

See [1] (their C API example). With function overloading and class methods it would be much more readable.

[0] https://github.com/getsentry/sentry-native [1] https://github.com/getsentry/sentry-native/blob/master/examp...


There's a big difference between extremely complex C++ templates and std::unique_ptr, std::string_view, and constexpr. Also, I've heard many game devs still saying unit tests either take too long to write or they aren't helpful.


I think that if we focused on building small, simple programs that do one thing well and compose, C would be OK. It’s when we build out behemoths that you really have a hard time reasoning about your code. At that point, vulnerabilities are almost guaranteed. This is true in any language, but more so in unsafe ones.

Maybe the suckless guys are on to something.


The complexity that must arise (otherwise the problem we are looking at is not interesting enough) will happen either way. Composing small tools will give you ugly-as-hell glue code over them — just imagine a modern browser. Would it really be better to do curl and interpretJS and buildDOM and all these things? Just imagine writing the glue code for that in what, bash?

We pretty much have exactly that, but better with programming languages composing libs, functions and other abstractions. That’s exactly the same thing but at a different (better) level.


For the corpse it makes little difference how it got hit.


It's amazing to read such a contrarian viewpoint. I don't agree, but it's somehow fresh to read.


And on the flip side tech gets Leetcode interviews, shoehorned microservices when you don't need them, and slow web browsers.

The game industry iterates far faster, and the result is programs that can handle far more features than the average tech methodology produces. It's the classic quantity-leads-to-quality pottery-grading experiment. Have you ever considered that these 'best practices' pile on so much crap that an experienced developer simply doesn't need?


I would not necessarily say that game development has better quality than web browsers. And the latter are anything but slow — they are engineering marvels no matter what you think of them. It’s just that websites like to utilize it shittily.


There's a lot of games out there, way more than there are web browsers. For a start, try comparing games with manpower/dev support levels similar to a web browser like Chrome. If we take an AAA open-world game, the game somehow gets more features done compared to Chrome. There's something that can be learnt there.

Also, last I heard there was a startup aiming to solve slow browsers by running Chrome on a server and streaming a video of the window somewhere. If that's not a setback I don't know what is.


> There's a lot of games out there, way more than there are web browsers.

Maybe this should tell you something about the relative complexity of the two problems. And frankly, features in a game are non-comparable to browser features.


You're missing the point. You can't make such a claim about complexity based on the amount of software there is. Games are by far the more popular software to make. This is why I've narrowed it down for you; hopefully you can understand that.

> features in a game are non-comparable to browser features

You can't hand-wave this away. I'm certain you need a lot more math knowledge if you want to implement something like a physical world. Does a web browser need that?


The fact that a single person with a decent amount of knowledge can write a game engine that is more or less complete, while not even FANG companies can write a web browser from scratch, should definitely be proof that the latter is more complex.

Some physics and linear algebra, while I'm not saying they are easy, are not a complex layout and CSS engine, with a state-of-the-art language runtime, with all the possible requests, sandboxing, etc. — of course you don't necessarily have to write an optimized browser, but still, just implementing a usable subset of the web is ridiculously hard.


> The fact that a single person with a decent amount of knowledge can write a game engine that is more or less complete, while not even FANG companies can write a web browser from scratch, should definitely be proof that the latter is more complex.

Again, you're trying to trace back from the results to the complexity of the task, and the linkage simply does not make sense.

> Some physics and linear algebra, while I'm not saying they are easy, are not a complex layout and CSS engine, with a state-of-the-art language runtime, with all the possible requests, sandboxing, etc. — of course you don't necessarily have to write an optimized browser, but still, just implementing a usable subset of the web is ridiculously hard.

Well, it seems to be easier because you can literally look it up on the internet and implement it as a set of rules. The hard part would be the combinatorial number of cases. If you don't have the math requirements to make a game with a 3D world from scratch, it will not feel good.


I'm a game dev and imo web-devs are the 'seat-of-our-pants' guys. In games we tend to use compiled and statically typed languages. They're pretty strict in what they allow and many errors are picked up by the IDE and/or prevent your code from even compiling.

Whereas with JavaScript, your code could be doing almost anything yet it will run just fine. Also, because it's not compiled, the IDE for JavaScript is much, much less helpful (e.g. there's no "find all references") and much more permissive.


I think it depends a lot on the studio and culture. I don't think I saw a single unit test before I left gamedev (a few smoke tests to make sure the game didn't crash, but that was about it).

I also rebuilt our audio streaming system over the course of 48 hours to use the texture streaming subsystem when we exceeded the 64 file handle limit on a certain platform. We needed to hit a date for a TGS demo and I can guarantee you that we had things which were even more YOLO for a fairly decently sized team/game.


To be honest, the game industry is a good counterpoint to TDD zealotry. You can go quite far with adequate results without a single unit test.


Some of the most buggy software I interface with are games. They crash and break in strange ways. I often wonder what it would be like if someone tested some edge cases or enabled a fuzzer for some functions. Like "what happens if I kill a freed entity", or "what happens if my character is 50% in a wall", etc.

Some of these bugs are experience ruining: think Fallout 76.


I for one appreciate fewer programs segfaulting due to unexpected input; the security is just a bonus.


there's a lot of overlap between safety and correctness.


Yeah man bring back peek() and poke() haha live life on the edge!


You forgot to add /s to the end of your comment.



