Much of the code generated by artificial intelligence tools contains serious security holes
These tools can produce complete code because they draw much of their material from code written by programmers on GitHub; however, the code they generate is weak from a security standpoint and contains many vulnerabilities that can be exploited.
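As an illustration of the kind of flaw such studies count (a hypothetical example, not one taken from the research itself), an AI assistant might suggest a database lookup that builds its SQL query by concatenating user input, which opens the door to SQL injection. The sketch below contrasts that pattern with a parameterized query; the table and column names are assumptions made for the example.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is spliced directly into the SQL string,
    # so an input such as "' OR '1'='1" makes the query return every row
    # (a classic SQL injection).
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer pattern: a parameterized query lets the database driver handle
    # the user input as data rather than as part of the SQL statement.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return the same result for ordinary input; only the second one remains safe when the input is hostile, which is the kind of difference the researchers looked for when classifying generated code as exploitable.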
According to results from a group of researchers at Stanford University, code that programmers write with the help of artificial intelligence tools such as Copilot contains security problems and vulnerabilities at a higher rate than code written by programmers who do not use such tools. One statistic found that about 40% of this code could be exploited for attacks, though the percentage is expected to fall as these tools develop and continue to learn.
The strange, almost comical part is that programmers who use AI coding tools are more confident that their code is better, with fewer errors and security vulnerabilities, than programmers who do not use them, which points to a level of trust in AI coding software that is not justified.