From WSJ article:
> The AI bot trounced all except one of the 10 professional network penetration testers the Stanford researchers had hired to poke and prod, but not actually break into, their engineering network.
Oh, wow!
> Artemis found bugs at lightning speed and it was cheap: It cost just under $60 an hour to run. Ragan says that human pen testers typically charge between $2,000 and $2,500 a day.
Wow, this is great!
> But Artemis wasn’t perfect. About 18% of its bug reports were false positives. It also completely missed an obvious bug that most of the human testers spotted in a webpage.
Oh, hm, so it didn't quite trounce the professionals, but ok.
Fair, but most static code analysis tools have equal or worse false-positive rates and are still seen as adding value.
If this is inexpensive in terms of cost and time, it will likely make business sense even with the false positives.
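Even pricing in triage of the false positives, the rough numbers favor the bot. A minimal back-of-the-envelope sketch, using the WSJ figures plus assumed values for engagement length, report volume, and triage cost (none of which appear in the article):

```python
# Rough cost comparison: AI pen tester vs. quoted human day rate.
# WSJ figures: ~$60/hour to run, ~18% false-positive rate, $2,000-$2,500/day for humans.
# Everything else below is an assumption for illustration.

AI_RATE_PER_HOUR = 60          # "just under $60 an hour" (WSJ)
HUMAN_RATE_PER_DAY = 2250      # midpoint of the quoted $2,000-$2,500/day
FALSE_POSITIVE_RATE = 0.18     # "about 18% of its bug reports were false positives"

HOURS_PER_DAY = 8              # assumption
ENGAGEMENT_DAYS = 5            # assumption: one-week engagement
REPORTS_FILED = 50             # assumption: bug reports produced over the engagement
TRIAGE_HOURS_PER_FP = 0.5      # assumption: human time to dismiss a false positive
TRIAGE_RATE_PER_HOUR = 150     # assumption: loaded cost of the reviewing engineer

ai_run_cost = AI_RATE_PER_HOUR * HOURS_PER_DAY * ENGAGEMENT_DAYS
triage_cost = (REPORTS_FILED * FALSE_POSITIVE_RATE
               * TRIAGE_HOURS_PER_FP * TRIAGE_RATE_PER_HOUR)
human_cost = HUMAN_RATE_PER_DAY * ENGAGEMENT_DAYS

print(f"AI run cost:            ${ai_run_cost:,.0f}")                 # $2,400
print(f"False-positive triage:  ${triage_cost:,.0f}")                 # $675
print(f"AI total:               ${ai_run_cost + triage_cost:,.0f}")   # $3,075
print(f"Human (quoted rate):    ${human_cost:,.0f}")                  # $11,250
```

Under those assumptions the false positives add maybe a few hundred dollars of review time against a multi-thousand-dollar gap, so the cheapness argument holds unless the quoted human rates are way off (see below).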
We cannot consider this report unbiased given that the authors are selling the product.
False positives on netpens are extremely common, and human netpen people do not generally bill $2k days. Netpen work is relatively low on the totem pole.
(There is enormous variance in what clients actually pay for work; the right thing, I think, to key off of is comp rates for people who actually deliver work.)