Generally you choose Python for the conciseness you mentioned, then move the performance-critical functions into another language like C or (in my experience the easiest) Cython. Ideally most of your code stays Python, and you either optimize self-contained pieces or find library bindings that have already done it for you.
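As a rough sketch of what that split looks like, here's a hot loop moved into a hypothetical Cython module (`fib.pyx` is a made-up name; the file has to be compiled with `cythonize` before the rest of your Python code can import it). The `cdef` declarations are what let Cython generate a C-speed loop instead of going through Python objects:

```cython
# fib.pyx -- compile with cythonize, then `from fib import fib` in plain Python
def fib(int n):
    # Typed locals: the loop below compiles to plain C arithmetic
    cdef int i
    cdef long a = 0, b = 1
    for i in range(n):
        a, b = b, a + b
    return a
```

The calling code doesn't change at all, which is the appeal: only the self-contained hot spot gets rewritten.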
A profiler like this can identify which parts are worth rewriting in a faster language. It's often easier to write everything in Python first and then measure than to guess up front which parts need to be fast.
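For a quick measurement you don't even need a third-party tool; the stdlib's `cProfile` is enough to see where the time goes (the `slow_sum` function here is just a stand-in for your own hot code):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive pure-Python loop, standing in for a real hot spot
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Print the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Functions that dominate the cumulative-time column are the candidates for Cython or a C rewrite; everything else can stay as readable Python.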
You can also get gains just by switching algorithms, both in pure Python and when using a compiled library like `numpy`. And some operations, like string manipulation or the `sqlite3` module, are already backed by optimized compiled code in the Python runtime.
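A small illustration of the algorithm-switch point, using only the stdlib: a membership test against a `list` is a linear scan, while the same test against a `set` is a hash lookup, so picking the right data structure is a speedup with no compiled code involved:

```python
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Same answer either way, but O(n) scan vs O(1) hash lookup
assert (99_999 in data_list) == (99_999 in data_set) == True

list_time = timeit.timeit(lambda: 99_999 in data_list, number=1_000)
set_time = timeit.timeit(lambda: 99_999 in data_set, number=1_000)
print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

On a worst-case element like this the gap is orders of magnitude, which is the kind of win profiling alone won't hand you; you have to notice the algorithmic choice.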