
I just commented this on a related story, so I'll repost it here:

Can’t help but wonder about the reliability and security of future software.

Given the insane complexity of software, I think people will inevitably and increasingly leverage AI to simplify their development work.

Nevertheless, will this new type of AI-assisted coding produce superior solutions, or will future software artifacts become operational time bombs waiting to unleash chaos on the world when their defects reveal themselves?

Interesting times ahead.



Humans have nearly perfected the art of creating operational time bombs; AI still has to work very hard if it wants to catch up on that. If AI can improve the test-to-code ratio in any meaningful way, it should be a positive for software quality.



