However, there are serious problems with Python as well. Some of the benefits of Python are also curses.
Python is fast for an interpreted language, but not as fast as compiled languages such as C, OCaml, or Haskell. Nor is its reference implementation JIT-compiled the way the Java and C# runtimes are, so it trails those languages as well. It is worth noting, though, that both the compiled and the JIT-compiled languages tend to be statically typed, losing the benefits of Python's flexible dynamic typing.
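To make the overhead concrete, here is a hedged micro-benchmark sketch (function and variable names are my own, and absolute numbers vary by machine): a pure-Python summing loop against the C-implemented built-in sum(), timed with the standard timeit module.

```python
import timeit

# Illustrative sketch: a pure-Python loop vs. the C-implemented built-in
# sum(). Every iteration of the Python loop pays interpreter overhead;
# the built-in runs its loop in C.
def py_sum(values):
    total = 0
    for v in values:
        total += v
    return total

data = list(range(10_000))

# Both compute the same result...
assert py_sum(data) == sum(data)

# ...but the interpreted loop is typically several times slower.
loop_time = timeit.timeit(lambda: py_sum(data), number=200)
builtin_time = timeit.timeit(lambda: sum(data), number=200)
```

No exact speedup is claimed here; the point is only that the gap comes from running the loop body in the interpreter rather than in native code.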
Python's dynamic typing can also be a double-edged sword. While it allows rapid programming, it doesn't catch as many bugs as a static type system. Proponents of static typing claim that most errors are type errors. Notably, in languages like Haskell, most errors are caught at compile time; if a Haskell program compiles, it most likely works as intended. Proponents of dynamic typing counter that if you use contracts and unit tests, type errors will be caught along with errors that aren't type errors, and that dynamic languages simply let you choose the level of test coverage you want instead of having to use types all the time.
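The asymmetry is easy to demonstrate. In this sketch (the function is illustrative), a type bug that a static checker would reject at compile time hides in Python until the faulty path actually runs, so only a test that exercises that path will catch it:

```python
# Illustrative sketch of a latent type bug under dynamic typing.
def average(values):
    return sum(values) / len(values)

# The happy path works, so the bug can hide indefinitely...
assert average([1, 2, 3]) == 2.0

# ...until a caller passes the wrong type and it blows up at runtime.
try:
    average(7)                 # a statically typed language rejects this call
    caught_by_test = False
except TypeError:
    caught_by_test = True      # a unit test covering this path catches it

assert caught_by_test
```

This is exactly the trade both camps are arguing over: the dynamic-typing position works only if a test actually exercises the bad call.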
I don't know the solution to this debate, but I will note that whenever a counterargument about language features begins with the word "if," it's generally not a good argument. Arguments that begin with phrases like "if you use contracts and unit tests" rely on good developer practice. Every programmer who has worked on a team knows that such "if"s get thrown out the window when a deadline looms, or are simply skipped because the programmer is inexperienced. It's extremely difficult for even great developers to maintain good test coverage. If the language doesn't force you to do it, it won't get done consistently.
However, I'd like to add that most proponents of static typing use languages whose type systems aren't strict enough for their arguments to be legitimate. Take the following examples:
printf("Hello, my name is %s.", 10);  /* %s handed an int, not a string */
if (i = 1) cout << "i == 1";          /* assignment where comparison was meant */
int i = 1.4;                          /* silent truncation to 1 */
These simple examples compile in C or C++ (at most with a warning), yet they would be rejected outright in more strictly typed languages like OCaml or Haskell. If you write code that relies on the type system to catch errors in C, C++, or Java, you shouldn't have problems, but again that's an argument based on an if.
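Interestingly, Python itself, though dynamically typed, is strongly typed and rules out one of these bugs at the syntax level: plain assignment is a statement, not an expression, so the `if (i = 1)` mistake cannot even parse. A minimal sketch:

```python
# Sketch: Python rejects assignment-as-condition outright, so this whole
# class of bug is impossible to write.
try:
    compile("if i = 1: print('i == 1')", "<example>", "exec")
    parsed = True
except SyntaxError:
    parsed = False

assert parsed is False
```

So while Python defers many checks to runtime, it is not weakly typed in the C sense; the two axes are independent.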
The next issue I'll point out with Python is its whitespace delimitation. While whitespace delimitation generally keeps your code and syntax clean, it presents problems when embedding code within code of another type (e.g., templates in Django or another web framework), because one might want to put multiple statements on the same line or maintain consistent indentation. It also causes problems when switching editors (as evidenced by the long-running tabs vs. spaces debate).
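The editor-switching hazard is not hypothetical. In this sketch, a file indented with a tab is later edited with spaces (as a differently configured editor would do), and CPython 3 refuses to guess what was meant:

```python
# Sketch: a tab-indented line followed by a space-indented line at what
# should be the same block level. CPython 3 raises TabError because the
# meaning would depend on the editor's tab width.
source = "if True:\n\tx = 1\n        y = 2\n"   # tab indent, then 8 spaces

try:
    compile(source, "<pasted>", "exec")
    accepted = True
except TabError:
    accepted = False

assert accepted is False
```

In a brace-delimited language this mix is merely ugly; in Python it's a syntax error, which is why the tabs-vs-spaces fight is a practical problem rather than a style debate.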
Lastly, I'll mention lambdas. Python lambda functions are limited to a single expression, whose value is implicitly returned, such as:
lambda x: x + 1
This creates a function that takes one argument and returns it incremented by one. The usual argument for keeping Python lambdas as they are is that there is nothing you can do with a lambda that you can't do with a named function, with one exception: you can't create a function without naming it. That is true, but irrelevant. The point of lambdas is to create anonymous functions; using a named function instead defeats the purpose, because now a name exists that shouldn't, and it might be used inappropriately.
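The anonymity point can be sketched like this: a lambda used once, in place, as a sort key, versus the named alternative that leaves an extra name behind (the names here are illustrative):

```python
pairs = [(2, "b"), (1, "c"), (3, "a")]

# The lambda exists only at its point of use; no name enters the namespace.
by_second = sorted(pairs, key=lambda pair: pair[1])
assert by_second == [(3, "a"), (2, "b"), (1, "c")]

# The named alternative works identically...
def second(pair):
    return pair[1]

assert sorted(pairs, key=second) == by_second
# ...but `second` now lingers in scope, where it might be reused inappropriately.
```

Both versions behave the same; the difference is purely whether the throwaway function pollutes the enclosing namespace.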
People often mention the Global Interpreter Lock and the lack of tail-recursion optimization as problems with Python. These aren't problems. Performance tests show that thread-switching without the GIL hurts performance more than keeping it. This will change as multi-core processors become the norm, and when it does I am sure the GIL will be removed, but right now the GIL is as it should be.
The lack of tail-recursion optimization is likewise irrelevant. Python is an interpreted language, and many optimizations simply don't happen. The one case where it might matter is deep recursion, where an unoptimized tail call eventually causes a stack overflow. In my opinion, however, this should be a criticism of languages that do optimize tail recursion: the optimization makes nearly identical recursive functions behave differently, breaking optimization's most basic rule that it must not change observable behavior.
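The stack-overflow case is easy to reproduce. In this sketch (the function is illustrative), a tail-recursive countdown works at modest depth but, with no tail-call optimization, exhausts the stack once the depth passes CPython's recursion limit:

```python
import sys

# Sketch: a tail call that CPython does not optimize away, so each call
# consumes a stack frame.
def countdown(n):
    if n == 0:
        return "done"
    return countdown(n - 1)   # a tail call, but a new frame is pushed anyway

assert countdown(100) == "done"

# Past the recursion limit, the "same" function fails with RecursionError.
try:
    countdown(sys.getrecursionlimit() + 100)
    overflowed = False
except RecursionError:
    overflowed = True

assert overflowed
```

A language with tail-call optimization would run both calls in constant stack space; CPython deliberately keeps the frames (preserving full tracebacks) and bounds the depth instead.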