Hacker News | danieka's comments

I thought the article would be about how, if we want AI to be effective, we should write good code.

What I notice is that Claude stumbles more on code that is illogical, unclear, or has bad variable names. For example, a variable named "iteration_count" that actually contains a sum will "fool" the AI.
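A hypothetical illustration of the kind of mismatch I mean (the function and names are invented for the example):

```python
def total_items(batches):
    iteration_count = 0  # despite the name, this accumulates a sum, not a loop count
    for batch in batches:
        # a reader (or a model) skimming this will likely assume a counter
        iteration_count += len(batch)
    return iteration_count
```

Renaming the variable to something like `total` removes the trap for humans and models alike.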

So keeping the code tidy gives the AI clearer hints about what's going on, which gives better results. But I guess that's equally true for humans.


Relatedly, it seems AI has been effective at forcing my team to care about documentation, including good comments. Before, when it was just humans reading these things, there was less motivation to keep them up to date. Now the idea that AI may be using that documentation as part of a knowledge base, or when evaluating processes, seems to motivate people to actually spend time updating the internal docs (with some AI help, of course).

It is kind of backwards, because it would have been great to do this before; it was just never prioritized. Now good internal documentation is seen as essential because it feeds the models.


Humans can handle these cases better, though, because they have better memory. The next time you see "iteration_count", you'll know it actually holds a sum, while a new AI session will have to rediscover that from scratch. I think this will only get better as time goes on, though.


You are underestimating how lazy humans can be. Humans will skim code, scroll into the middle of some function, and assume iteration_count means iteration count. The AI, on the other hand, will have the full definition of the function in its context every time.


You are underestimating the importance of attention. You can have everything in context and still attend to the wrong parts (e.g., bad names).


Improving AI is easier than improving human nature.


Unfortunately, so far coding models seem to perform worse and break in other ways as the context grows, so it's still best practice to start a new conversation even when iterating. Luckily, high-end reasoning models now catch when variable names don't match what they actually contain (as long as the declaration is provided in context).


Or you immediately rename it to avoid the need to remember? :)


What I find works really well: scaffold the method signature and write your intent in comments covering the inputs, outputs, and any mutations/business logic, plus instructions on the approach.

An LLM has a very high chance of one-shotting this and doing it well.
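A sketch of what such a scaffold might look like; the function, tiers, and rates are invented for illustration, and the body shows the kind of implementation a model typically one-shots from the stated contract:

```python
def apply_discount(order_total: float, customer_tier: str) -> float:
    """Return the order total after discount.

    Inputs:    order_total — pre-discount amount in the order currency.
               customer_tier — one of "basic", "silver", "gold".
    Output:    discounted total, never negative.
    Mutations: none; pure function.
    Approach:  look up a flat percentage per tier; unknown tiers get no discount.
    """
    # Body below is the sort of thing the LLM fills in from the contract above.
    rates = {"basic": 0.0, "silver": 0.05, "gold": 0.10}
    discounted = order_total * (1.0 - rates.get(customer_tier, 0.0))
    return max(discounted, 0.0)
```

The point is that the docstring fixes the interface and intent before any generation happens, so the model has far less room to guess wrong.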


This is what I tend to do. I still feel like my expertise in architecting the software and its abstractions is 10x better than anything I've seen an LLM do. I'll ask it to do X, then Y, then Z, and it'll give me the most junior-looking code ever. No real thought about abstractions; maybe the logic gets split into different functions if you're lucky. But there's no big-picture thinking, and even when I prompt it well it creates bad abstractions that expose too much information.

So eventually it gets to the point where I'm basically explaining to it what interfaces to abstract, what should be an implementation detail and what can be exposed to the wider system, what the method signatures should look like, etc.

So I've had a better experience when I just write the code myself at a very high level. I know what the big picture of the software will look like: what types I need, what interfaces I need, what different implementations of something I need. So I create them as stubs. The types have no fields, the functions have no bodies, and they just have simple comments explaining what they should do. Then I ask the LLM to write the implementations of the types and functions.
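A minimal sketch of the kind of stub file this produces; all names here (Invoice, PaymentGateway, reconcile) are invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    ...  # fields left for the LLM: amount, currency, due date


class PaymentGateway:
    """Interface for charging invoices; implementations are swappable."""

    def charge(self, invoice: "Invoice") -> bool:
        # Should attempt the charge and return True on success.
        raise NotImplementedError


def reconcile(invoices: list, gateway: "PaymentGateway") -> list:
    # Should charge each unpaid invoice and return the ones that failed.
    raise NotImplementedError
```

The shapes and comments carry the architectural decisions; the LLM only fills in bodies.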

And to be fair, this is the approach I have taken for a long time now. But whenever a new, more powerful model is released, I try to get it to solve these kinds of day-to-day problems from prompts alone, and it still isn't there yet.

It's one of the biggest issues with LLM-first software development, from what I've seen. LLMs will happily build upon bad foundations, and getting them to "think" about refactoring the code before adding a new feature takes more prompting effort than most people are willing to spend. So they stack change upon change upon change, and sure, it works. But the code becomes absolutely unmaintainable. LLM purists will argue the code is fine because it will only ever be read by an LLM, but I'm not convinced. Bad code definitely confuses the LLMs more.


I think this is my experience as well.

I tend to use a shotgun approach and then follow with an aggressive refactor. It can actually take a lot of time to prune and restructure the code well; at least it feels slow compared to opening the Claude firehose and spraying out code. There need to be better tools for pruning, because Claude is not thorough enough.

This seems to work well for me. I write a lot of model training code, and it works really well for the breadth of experiments I can run. But by the end it looks like a graveyard of failed ideas.


What if I write the main function but stub out calls to functions that don't exist yet; how will it do with inferring what's missing?


I've always understood synoptic to mean "see together", that is, the synoptic gospels are meant to be seen together, since they are so similar.


This is correct; the above relation to "synopsis" is a false etymology that only sounds plausible because of the shared sense of the common syn- prefix.


I was about to assert the same as you with as much confidence, but the etymology source I trust most (EtymOnline) nearly agrees with OP [0]:

> 1763, in reference to tables, charts, etc., "pertaining to or forming a synopsis," from Modern Latin synopticus, from Late Latin synopsis (see synopsis). It was being used specifically of weather charts by 1808. Greek synoptikos meant "taking a general or comprehensive view."

> The English sense "affording a general view of a whole" emerged by mid-19c. The word was used from 1841 specifically of the first three Gospels, on notion of "giving an account of events from the same point of view." Related Synoptical (1660s). The writers of Matthew, Mark, and Luke are synoptists.

The subtle difference from OP's account is that EtymOnline does include some sense that 'synoptic' describes the way in which the works relate to one another. But they do say the connection to 'synopsis' was, in fact, part of the original intent of the usage.

[0] https://www.etymonline.com/word/synoptic


I thought "synoptic" meant "sharing common point of view", or "written from the same perspective", but I'm really not an expert on this.


From the FAQ:

> When will I receive my enemy?

> your enemy is already on its way. do not be alarmed.


The unions don’t make any money from forcing a collective bargaining agreement on Tesla. Unions make their money from membership dues, and all workers are free to join, or not to join, a union.


If Tesla can operate in Sweden without having to play the game, that makes the unions worry that other companies may follow suit. That would mean the union would lose power and future members.


They make their money from members, yes, but no one wants to be a member if the union can't exert its power over companies. That is why they pull these kinds of stunts.

Please explain who they are fighting for in this case? Not the workers because they aren't even on strike lol.


About 1.5 minutes for the build, which is Vue 2 + Vuetify + Vite.

Our e2e tests with Cypress take about 1.5 hours, but with parallelisation we get that down to about 10 minutes, half of which is spent setting up the Docker environment.


I don't know exactly how they work, but GDPR has some dispensations when data is used for academic purposes.


There are two taxes that apply to dividends in Sweden. First, the company pays a corporate tax of 20.7% on profits. Then the person receiving the dividend pays a capital gains tax of 20% [1]. So the effective tax rate is about 36% [2], which is roughly half (?) of the tax on labor.

Also, companies do not have to pay VAT on services and goods, so it's very common to buy work tools and such through the company. That way you can buy things with untaxed money and without paying VAT. You're not allowed to pay for private living expenses through the company, but I know people who do, which effectively makes their tax rate 0%. Until they get audited, that is. Buying a computer and a headset, and paying for your phone service, through the company is just fine.

[1] Note that there is a cap on how large a dividend you pay only 20% on; above that cap the tax rate is higher.

[2] If I know how to do math.
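The ~36% figure can be checked by compounding the two rates from the comment above:

```python
corporate_tax = 0.207  # corporate tax on company profits
dividend_tax = 0.20    # capital gains tax on the dividend, below the cap

# Of 1 unit of profit, the company keeps (1 - 0.207); the recipient then
# keeps (1 - 0.20) of that. The effective rate is whatever is taxed away.
kept = (1 - corporate_tax) * (1 - dividend_tax)
effective_rate = 1 - kept  # ≈ 0.366, i.e. about 36%
```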


It sounds like you only want to build the frontend yourselves? You could take a look at Cube [1], which will probably work fine as a backend for your in-app reporting.

[1] https://cube.dev/


+1, we use it and love it.

Cube is “headless BI”, which means you keep your own widgets at the front end and your existing DB at the backend, and Cube sits in the middle, helping you combine/filter/group/stat across your tables and other data sources.


Cube has saved me hundreds of hours. I use it as the backend for reporting and dashboards inside our SaaS. In our frontend I've built a light version of Power BI, with Cube as the backend. Instead of manipulating SQL directly, I use Cube's JSON query format. It's kind of difficult to explain, but Cube might be the best piece of software I have ever used.

Maybe a good tagline would be "self-hostable Backend as a Service for data analysis"?
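For a feel of that JSON query format, here is a sketch of a Cube-style query, shown as a Python dict; the member names like "Orders.count" are invented for illustration, and the exact schema should be checked against Cube's docs:

```python
import json

# Sketch of a Cube-style JSON query: measures to aggregate, dimensions to
# group by, a time dimension with granularity, and a filter.
query = {
    "measures": ["Orders.count", "Orders.totalAmount"],
    "dimensions": ["Orders.status"],
    "timeDimensions": [{
        "dimension": "Orders.createdAt",
        "granularity": "month",
        "dateRange": "last 6 months",
    }],
    "filters": [{
        "member": "Orders.status",
        "operator": "notEquals",
        "values": ["cancelled"],
    }],
    "limit": 100,
}

# The frontend sends this to Cube, which compiles it to SQL for the DB.
payload = json.dumps(query)
```

The appeal is that the frontend never builds SQL strings; it composes these declarative queries instead.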


The key point here is that the Gospel of Marcion is only possibly the earliest gospel. The wiki article you link to clearly says that is a minority view, not in line with early church tradition.


The earliest reference to the four canonical gospels comes from Irenaeus, who also references the then decades-old Gospel of Marcion. The early church is not a reliable source on this, because it was strongly opposed to Marcion.

