Atilla Bilgic

I want to highlight one important topic before answering your questions.

All the emerging coding platforms run as an application layer over LLMs, which are pattern-following systems shaped by their training data. We all know that the publicly available code bases these LLMs were primarily trained on contain many flaws, and unfortunately those flaws have been ingrained into the model weights. This is the underlying fact behind these statistics.

I totally agree about the speed boost. With AI assistance I developed a solution in two weeks that would normally have taken more than a month. But I spent most of my time on comprehensive task planning, as I would in a Scrum team, and on code review cycles to make sure the delivery met a certain quality level.

If you establish a simple Continuous Integration (CI) pipeline and add a Static Application Security Testing (SAST) step to it, you can catch the majority of the unsafe patterns and potential flaws.
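As a sketch, such a pipeline can be very small. The following GitHub Actions workflow is illustrative only: the job and step names are made up, and Semgrep stands in for whichever SAST tool you prefer (its `--error` flag makes the scan exit non-zero on findings, which fails the pipeline stage):

```yaml
# .github/workflows/ci.yml -- minimal CI with a SAST gate (illustrative)
name: ci
on: [pull_request]
jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run SAST scan
        run: |
          pip install semgrep
          semgrep scan --config auto --error .
```

Any CI system works the same way here: the pipeline treats the scanner's non-zero exit code as a failed stage, so flawed AI-generated code never reaches the merge step unreviewed.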

For the audit trail part: if you spend your time on planning and repository setup, this becomes easier. You always have a reference to what you requested and what you got, in the task definition and the merge/pull request content. The only additional information needed is which coding model, and which version of it, was used.

So far I have not seen any automation or tooling that sets this up as git hooks. I hope we get something soon that takes these manual steps out of the equation.
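Until such tooling exists, a hand-rolled hook can cover part of it. The sketch below is a hypothetical `.git/hooks/pre-commit` script, again assuming Semgrep as the SAST tool; it scans only the staged files and blocks the commit on findings, skipping gracefully when the scanner is not installed:

```shell
#!/bin/sh
# Hypothetical pre-commit hook sketch: run a SAST scan on staged files
# before allowing the commit. Semgrep is an example tool; substitute yours.

run_sast() {
  if ! command -v semgrep >/dev/null 2>&1; then
    echo "pre-commit: semgrep not installed, skipping SAST check"
    return 0
  fi
  # Scan only files staged for this commit (added/copied/modified).
  staged=$(git diff --cached --name-only --diff-filter=ACM 2>/dev/null)
  [ -z "$staged" ] && return 0
  # --error exits non-zero on findings, which aborts the commit.
  semgrep scan --config auto --error $staged
}

run_sast
```

Dropping this into `.git/hooks/pre-commit` (and making it executable) moves the SAST gate from the CI server onto every developer's machine, so flaws are caught before they even reach a branch.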