I think you hit the nail on the head with the Doctor GPT analogy, but missed the mark with gatekeeping. I don't think it's about gatekeeping at all.
A freelance developer (or a doctor) is used to working within a particular framework and process flow. For any new feature, you start by generating user stories, work out a high level architecture, think about how to integrate that into your existing codebase, and then write the code. It's mostly a unidirectional flow.
When the client starts giving you code, it turns into a bidirectional flow. You can't just copy/paste the code and call it done. You have to go in the reverse direction: read the code to work out what the high level architecture is, which user stories it implements, and which it does not. After that, you have to go back in the forward direction to actually adapt and integrate the code. The client thinks they've made the developer's job easier, but they've actually doubled the cognitive load. This is stressful and frustrating for the developer.
Web browsers, yes. With GUIs and games, it's less clear. Of course you can write GUIs and games in any Turing complete language, but there's still a lot of work to be done in finding the right ergonomics in Rust [1, 2].
Because fiddling with Windows firewall settings is a power user feature that only a fraction of a percent of users will touch. If it ever becomes more widely used, then I agree, all bets are off.
> Something is very wrong if it takes 20+ years to field new military technologies.
Is it? By what criteria? IMHO the point is to get new tech out quickly enough that you aren't falling behind other major powers in the international arms race. The F-35 seems to be ahead of the competition because countries around the world are lining up to buy it over much cheaper alternatives from Russia (Su-57) and China (J-35).
Not to mention that the Su-57 also had about a 20 year development cycle. Maybe that's just how long it takes to develop a new stealth fighter?
> Learning what though? When I wrote software I learn the domain, the problem space, the architecture, the requirements, etc
You don't learn these things by writing code? This is genuinely interesting to me, because it seems that different groups of people have dramatically different ways of approaching software development.
For me, the act of writing code reveals places where the requirements were underspecified or the architecture runs into a serious snag. I can understand a problem space at a high level from problem statements and UML diagrams, but I can only truly grok it by writing code.
You're right, but coding 10 years ago, 20 years ago, and 30 years ago also looked very different from coding today in most cases. In every decade, we've abstracted away things that were critical and manual before. Is having LLMs write the code that much different from pulling in libraries rather than rolling your own? Or automating memory management instead of manually allocating and freeing? Or using if/else/for instead of building your own control flow out of jumps?
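To make that last point concrete, here's a minimal C sketch (my own illustration, not something from the thread): the same array sum written with hand-rolled jumps, roughly the way control flow was built before structured programming, and again with a plain for loop.

    #include <stdio.h>

    /* Sum an array using explicit jumps: roughly how control flow
       was hand-built before structured if/else/for took over. */
    int sum_jumps(const int *xs, int n) {
        int i = 0, total = 0;
    loop:
        if (i >= n) goto done;  /* exit condition */
        total += xs[i];
        i++;
        goto loop;              /* back to the top */
    done:
        return total;
    }

    /* The same logic once the language abstracts the jumps away. */
    int sum_for(const int *xs, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += xs[i];
        return total;
    }

    int main(void) {
        int xs[] = {1, 2, 3, 4};
        printf("%d %d\n", sum_jumps(xs, 4), sum_for(xs, 4));  /* 10 10 */
        return 0;
    }

Nobody would say the author of the second version "isn't really programming" because the compiler handles the jumps; the open question is whether LLM-written code is just the next rung on that ladder.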
What happens when a researcher makes a generative art model and publicly releases the weights? Anyone can download the weights and use it to turn a quick profit.
Should the original research use be considered legitimate fair use? Does the legitimacy get 'poisoned' along the way when a third party uses the same model for profit?
Is there any difference between a mom-and-pop restaurant that uses the model to make a design for their menu and a multi-billion dollar corp that's planning to lay off all their in-house graphic designers? If so, where between those two extremes should the line be drawn?
I'm not a copyright attorney in any country, so the answer (assuming you're asking me personally) is "I don't know and it probably depends heavily on the specific facts of the case."
If you're asking for my personal opinion, I can weigh in with my take on some of the fair use factors.
- Research into generative art models (the kind done by e.g. OpenAI or Stability AI) is only possible due to funding. That funding mainly comes from VC firms looking for ROI by replacing artists with AI[0], with debt financing from major banks on top of that. This drives both the market effect factor and the purpose/character of use factor, and not in their favor. If the research has limited market impact and is not done for the express purpose of replacing artists, then I think it would likely be fair use (background removal/replacement could be an example).
- I don't know if there are any legal implications of a large vs. small corporation profiting from a product of copyright infringement. Maybe it violates some other law, maybe it doesn't. All I know is that the output of a GenAI model is not copyrightable, which to my understanding limits its profit potential, since literally anyone else can use it for free.
[1] https://cyberinsider.com/threat-actors-inject-fake-support-n...