> For example, in a situation where you can strongly benefit from the Napoleon technique, and all the potential negative outcomes are minor and unlikely to occur, you will almost always want to implement this technique. Conversely, in a situation where there is even a moderate likelihood that this technique will lead to serious negative outcomes, you will likely want to avoid using it, even if it has some potential positive outcomes.
I swear, AI is decreasing everyone's reading and writing abilities.
Well-written language conveys maximum information (or emotional impact, etc.) with minimum verbosity. AI is incentivized to do the exact opposite, resulting in slop like the above.
The quoted paragraph above takes 71 words to say "You should do this technique if the positive potential outcomes outweigh the negative ones," which is such a banal thought as to have been a waste of the reader's time, the writer's time, and the electricity it took to run an AI to generate those sentences.
Increase productivity in invading countries and killing their inhabitants? Is there any Attila Method or Pinochet Hack I could complement the Napoleon Technique with?
told my VCs I was going Napoleon mode and they gave me secondary