

Do you know of any reports that AI companies did this intentionally? It would make business sense for them to make them addicting


Do you know of any reports that AI companies did this intentionally? It would make business sense for them to make them addicting


“Why does everyone hate AI?!?!”
I loved python when I was a junior dev. Now I hate it (except for things like computational math). I have to add debug statements to figure out that someone snuck in the wrong type into the code.


I find that it basically can’t do decent architecture. My last attempt to use it ended with it using casbin, but then rewriting it’s own authorization framework and trying to use both at the same time 😶.
I think there is a lot of power here, but it needs very heavy guidance and handholding to do it well. Otherwise it makes very stupid intern level decisions


Ideally they’d compare time to write + time to fix. My experience is that if you use test driven development, LLM isn’t too bad. No worse than an intern.
I think it comes down to who is using the LLM. I had a junior dev once “presumably” AI gen a ton of code (broken trash). Then to fix it, they wrapped each function in a try catch block that dropped the error. Unit tests were mocked out to the extent they didn’t test anything.
When I use an LLM, I have tests and hard constraints on the LLM. It isn’t good enough to do everything, but it can generate about 80% of a simple app
I had a co-worker who wrapped every function generated by AI with try/except (discarding the error).
The unit tests passed flawlessly!!! They also didn’t test anything at all
I was livid. My boss was an idiot and I tried to explain the issue, but I got in trouble because I took too long to fix it. 1000 of lines of mangled shit.
I just found a new job (got more pay too)