Hunting Performance in Python Code – Part 4. CPU Profiling (interpreter)

In this post I will talk about some tools and ways in which you can profile the interpreter, when running a Python script.

CPU profiling means the same thing here as in the previous post, but this time the target is not the Python script itself. Instead, we want to know how the Python interpreter works and where it spends most of its time while running our script.

We will see next how you can trace the interpreter's CPU usage and find its hotspots.

Read More »


Implementing sendmsg and recvmsg for PyPy

Sendmsg and recvmsg are two system calls that allow sending and receiving messages on a socket, much as one can with send/recv or sendto/recvfrom, with a few notable differences:

  • Extra information that is not considered part of the message can be passed along with a packet on a socket. This is known as ancillary data or control information.
  • Sendmsg and recvmsg can be used on both connected and unconnected sockets (if the protocol allows), since they can specify an address in the same way sendto/recvfrom do.
  • The ancillary data allowed on a socket differs from one socket type to another. For example, on Unix sockets, sendmsg and recvmsg can use ancillary data to pass file descriptors from one process to another, whereas on UDPv6 sockets they can attach extra information about the packet (such as IPV6_PKTINFO) in the ancillary data.
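To make the file-descriptor-passing case concrete, here is a minimal sketch (my own illustration, not code from the post) using Python's socket.sendmsg/socket.recvmsg with SCM_RIGHTS ancillary data; it assumes Python 3.3+ on a Unix-like system:

```python
import array
import os
import socket

# A connected pair of Unix-domain sockets (stands in for two processes).
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Create a pipe; its read end is the descriptor we will transfer.
read_fd, write_fd = os.pipe()

# Pack the descriptor into SCM_RIGHTS ancillary (control) data and send it
# alongside a normal message.
fds = array.array("i", [read_fd])
parent.sendmsg([b"fd incoming"],
               [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

# Receive both the message and the ancillary data on the other end.
msg, ancdata, flags, addr = child.recvmsg(64, socket.CMSG_LEN(fds.itemsize))
level, ctype, data = ancdata[0]
received = array.array("i")
received.frombytes(data[:fds.itemsize])
new_fd = received[0]

# The received descriptor refers to the same pipe as read_fd: data written
# to write_fd can be read back through it.
os.write(write_fd, b"hello")
echoed = os.read(new_fd, 5)
print(msg, echoed)
```

In a real program the two sockets would live in different processes; the kernel duplicates the descriptor into the receiver, so new_fd is a valid handle there as well.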

Read More »

Hunting Performance in Python Code – Part 3. CPU Profiling (scripts)

In this post I will talk about some tools that can help us solve another painful problem in Python: profiling CPU usage.

CPU profiling means measuring the performance of our code by analyzing how the CPU executes it. This translates to finding the hot spots in our code and seeing how we can deal with them.

We will see next how you can trace the CPU usage of your Python scripts. We will focus on the following profilers (click each one to go to the corresponding section of this post):
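Before the dedicated profilers, a minimal sketch with the standard library's cProfile shows the general idea; the function below is illustrative, not taken from the post:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately unoptimized hot spot for the profiler to find.
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Report the five most expensive calls, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The report lists each function with its call count and time, which is exactly the kind of hot-spot data the tools below expose with more detail (per-line timing, call graphs, and so on).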

Read More »

Hunting Performance in Python Code – Part 2. Measuring Memory Consumption

In this post I will talk about some tools that can help us solve a painful problem in Python, especially when using PyPy: memory consumption.

Why are we concerned with this in the first place? Why don’t we care only about performance? The answer to these questions is rather complex, but I’ll summarize it.

PyPy is an alternative Python interpreter that features some great advantages over CPython: speed (through its Just-in-Time compiler), compatibility (it is almost a drop-in replacement for CPython) and concurrency (using stackless and greenlets).

One downside of PyPy is that it generally uses more memory than CPython, due to its JIT and garbage-collector implementations. Nevertheless, in some cases it manages to use less memory than CPython.

We will see next how you can measure the amount of memory used by your application.

Read More »

Enabling Profile Guided Optimizations for PyPy

Compared to CPython, PyPy relies more on achieving speed-ups by "jitting" code as often as possible, rather than on its interpreter. However, jitting is not always an option, or at least not entirely. A good improvement for CPython that we think might benefit PyPy as well, without impacting JIT performance, is Profile-Guided Optimization (PGO, or profopt).
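For reference, this is roughly what a PGO build looks like in practice, sketched here via CPython's own build flow (PyPy's translation needs its own wiring, which is what this post explores):

```shell
# In a CPython source checkout: --enable-optimizations turns on PGO.
# The build first compiles an instrumented interpreter, runs a training
# workload to collect profile data, then recompiles using that profile.
./configure --enable-optimizations
make -j4

# Under the hood this boils down to the compiler's PGO flags, e.g. for gcc:
#   gcc -fprofile-generate ...   # instrumented build
#   (run the training workload)
#   gcc -fprofile-use ...        # optimized rebuild guided by the profile
```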

I thank the PyPy developer community for the patience, kind advice and constant feedback they gave me on the #pypy IRC channel and over email, which helped me make this possible, especially Carl Friedrich Bolz-Tereick and Armin Rigo.

Read More »