Friday, February 23, 2018

Mu, the little #python editor that could.

Nicholas and the Mu gang have been busy with their python editor, and it now supports pgzero.


Mu is a simple code editor for beginner programmers.


I'm pretty excited about this. Here's a little demo of it in action with pgzero...

Whilst it's still in 'beta' with rough edges... it's pretty neat already. They also seem to be moving quickly through the issues people in the community encounter, with each beta release.

It's been super-lovely to see the python community get behind them and offer their support.

Stop idling, and go have a look: https://github.com/mu-editor/mu

pygame documentation updates

There are some documentation updates on https://www.pygame.org/docs/


The website documentation builder was still waiting for updates from bitbucket (our previous code host). lol? oops. Robots running in the cloud, doing things no one wants them to do anymore. I had to write a new github integration, so now commits to master on github/pygame/pygame cause the website docs to be rebuilt automatically again.

And python -m pygame.docs didn't work with the wheel builds... because they don't include docs (for some reason). So now, if the docs can't be found locally, it opens the web browser instead.
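The fallback is roughly this shape (a sketch of the idea only, not the actual pygame.docs source; the local path here is made up):

import os
import webbrowser

def open_docs():
    # If the package shipped with docs, open them from disk...
    local = os.path.join(os.path.dirname(__file__), "docs", "index.html")
    if os.path.exists(local):
        webbrowser.open("file://" + os.path.abspath(local))
    else:
        # ...otherwise fall back to the online documentation.
        webbrowser.open("https://www.pygame.org/docs/")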
There were a bunch of documentation updates from Ian Mallett and Lenard Lindstrom which are finally up on the website. Lots of editing, and improvements to the tutorials.
Additionally a bunch of old links were fixed. Mostly to point to https:// versions of pages.
The docs are being built with a new version of Sphinx, which has nicer output in a few ways. See http://www.sphinx-doc.org/en/master/changes.html#release-1-7-0-released-feb-12-2018

Since it was a major version upgrade of Sphinx, there were some small expected issues to deal with, including their python API being broken/deprecated (just call the command line program instead), and disabling the on-by-default smart quotes (which are terrible for people copy-pasting code examples).

Also, the launchpad PPA is building again. It got stuck because someone snuck a gpg field into their git commit, which broke the bzr mirror code. Except the packaging for that PPA is from 2013... and needs updating. Luckily it seems a few people have done the packaging work for various flavours of Debian... but somehow none of it has gone through. But anyway... The badge is being made again (displayed on the readme) via the github webhook, and will alert us if things break on ubuntu/i386/amd64/armhf.

Speaking of badges... there's now a 'commits since last release' badge on https://github.com/pygame/pygame which links to a diff since the last release.

Finally, I updated all the old bitbucket wiki pages to point to the same link on the pygame wiki (https://www.pygame.org/wiki). eg, https://bitbucket.org/pygame/pygame/wiki/CookBook points to https://www.pygame.org/wiki/CookBook

Thursday, February 15, 2018

Hey! It's work on pygame stuff week.

So, it's been about 9 days since I had "a fun day working on pygame stuff". That morning I woke up and just started working on pygame things. This is a pattern with me between big freelance contracts. Last year I spent some months on pygame stuff, and some months the year before that too.

It was such a fun day... I just kept going. And here we are 9 days later with a web log of some of the things that happened.
A fun 9 days working on pygame stuff.

New pygame.org website changes.

Got a new version of the pygame website out. It took a couple of days, but I fixed a number of issues, and improved a few things. In the process I found a lot more issues on the website than I fixed. So there are now more issues listed than when I started. Feels like progress.

"it's the schrödinger's bugtracker" -- þeshipu

pygameweb 0.0.1 - Weedy Seadragon

Witness the beauty of the Weedy Seadragon.

#30 https url scheme default (not in DEBUG). https login links even on http.
#29 News feeds (rss, atom) are in the header of every page again.
#28 Improved table of contents on the wiki. Syntax help on edit page. Clear cache after edit.
#27 Changes are now shown again when people do a new project release. Project test improvements.
#31 /admin turned off by default, APP_ADMIN config variable.
#13 Admin interface improvements

"Are pygame releases going to have code names too? :-)" I guess so, but maybe the code name for all of them should be "Weedy Seadragon", because it's such a magical creature.

Debian bugs.

I've investigated a number of Debian reported issues, and even managed to fix some of them. It never ceases to amaze me that testing on other systems uncovers real bugs, ones which also happen on other platforms (but usually in harder to detect ways). So, for me, it's always worth working on weird platforms - if only for this reason. https://github.com/pygame/pygame/pull/391

Thanks to Dominik for bringing them into the pygame issue tracker, and for improving the Debian pygame package generally.

It's important to get these fixes in, as I think they are blocking a Debian release of pygame 1.9.3... which also blocks derivatives such as the Raspberry Pi's Raspbian distro (which doesn't provide any support to pygame, or Debian, and mostly just takes from the Debian packages). Another important derivative is Ubuntu, which also hasn't updated its pygame package.

Here is the Debian tracker for the pygame package, which lists things that need to be done. If anyone can spot something I could help out on there, please feel free to point it out - as I have some days now to investigate and fix things. https://tracker.debian.org/pkg/pygame

Here is the build status on the various Debian platforms:

Thanks to the OpenPOWER foundation and the University of Campinas who provide Power virtual machines to open source developers. It still feels magical to me when I log into a box on another continent to shell-about.

So, I got some Power PC virtual machines for testing. I fixed a few pygame-on-Debian bugs, and commented on the issues to say they're fixed now. Fingers crossed a newer pygame package will make it into Debian, Ubuntu and Raspbian.


Outdated installation instructions from around the net.

TLDR; If you see anywhere on the net that has outdated pygame install/compile instructions, please let them know, and send them a link to GettingStarted.

One thing I've noticed is that there are quite a few web pages on the interwebs describing 'how to install pygame'. These were all helpful in the past, but are now out of date. The pygame website itself is one place with outdated instructions too. Now they just cause people to fail installing old versions of pygame, and to report the same old bugs again and again.

I'm asking people to:
  1. Link to the Getting Started guide, which should have tested installation methods for many platform/python combinations: https://www.pygame.org/wiki/GettingStarted
  2. Update instructions that tell people to install from the outdated bitbucket repository (we moved to github).
I've been updating the pygame website itself to follow this advice in the parts where it wasn't current.
Also, soon the bitbucket repo setup.py will get a message added to it.
In the coming months I'll make that bitbucket repo error out, with the same instructions.

Additionally, I'll add a message on compile failure linking to the relevant Compile* page.
Eg, if there's a failure on an Ubuntu machine it will link to the compile page for Ubuntu. https://www.pygame.org/wiki/CompileUbuntu
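The failure handler could look something like this sketch (only CompileUbuntu and the Compilation index are confirmed wiki pages above; the other page names are my hypothetical examples):

import platform
import sys

def compile_help_url():
    # Pick the wiki page matching the build platform, if we know it.
    if sys.platform.startswith("win"):
        return "https://www.pygame.org/wiki/CompileWindows"   # hypothetical page name
    if sys.platform == "darwin":
        return "https://www.pygame.org/wiki/CompileMacOSX"    # hypothetical page name
    distro = platform.linux_distribution()[0]  # fine on 2018-era pythons
    if distro == "Ubuntu":
        return "https://www.pygame.org/wiki/CompileUbuntu"
    # Fall back to the index of all the Compile* pages.
    return "https://www.pygame.org/wiki/Compilation"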

All the compile pages will be updated with the versions they were confirmed working on (luckily, this info is sort of stored in the wiki version history). Old versions of Ubuntu require different compilation advice to the latest Ubuntu versions, so the latest instructions will go at the top of the page, with old versions underneath. I'll also gather links from around the internet, because sometimes better compilation instructions are listed on people's blog posts than in the wiki.

All the Compile pages are listed here: https://www.pygame.org/wiki/Compilation
Further development/contributing instructions are listed here: https://www.pygame.org/wiki/Hacking

Hopefully, with some more structure, the Compile* pages will get better over time.

Python 3.7.0b1

Some python 3.7.0b1 issues were fixed, and I got it working on Travis CI with linux. There is still work to try it out on Windows and Mac, and to make binary wheels for all the platforms.

Are we PyPy yet?

It seems various people have been working on greatly improving PyPy's handling of the CPython C API, as well as fixing issues here and there with pygame when compiled with PyPy.

Wow. Many thanks to all the people working on that.


Now it's at a stage where a bunch of things actually work. Not enough to play a game on it... but getting much closer.

Learning this good news... I got PyPy building on Travis CI (pygame/pygame), with some tests still failing. Then I started a milestone for "Are we PyPy yet?". Stuart Axon has already filed one issue, and asked the PyPy developers for help on how to fix it. Stuart wasn't joking when he said the PyPy devs are super helpful.

This particular issue has to do with surface locking and reference counting. In CPython, when you do "del array" it does what you think... it deletes the array right then and there. However, PyPy has a different type of memory management, and PyPy will decide when to delete things, thank you very much!

Like with files in python, it was suggested we gain a method like '.close()' to explicitly release these locks, instead of relying on reference counting semantics, and also to support use as a context manager with 'with'.
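To make the difference concrete, here's a sketch (the .close() method and 'with' support shown here are the proposal being discussed, not an API pygame had at the time):

import pygame

surf = pygame.Surface((64, 64))
ar = pygame.PixelArray(surf)   # takes a lock on surf
del ar                         # CPython: lock released immediately (refcount hits zero)
                               # PyPy: released whenever the GC gets around to it

# The proposal: explicit release, like file objects...
ar = pygame.PixelArray(surf)
ar[10, 10] = (255, 0, 0)
ar.close()                     # deterministically release the surface lock

# ...or scoped with a context manager.
with pygame.PixelArray(surf) as ar:
    ar[20, 20] = (0, 255, 0)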

PixelArray uses Surface locks. It does this so that if your Surface lives in graphics hardware memory underneath, your operating system won't crash on you (I hope).
Leave the pixels alone!!!! What did they ever do to you?!?

Next up is to start filing pygame issues for each of the test failures and then slowly fixing them. Some of them should be fairly easy, and others are downright scary (BufferProxy and no ctypes.pythonapi on PyPy makes me afraid... very afraid).

Then we have to figure out how to build binary wheels for the various platforms, and probably many other things... All PyPy related work will be tracked on the Are we PyPy yet? milestone in the pygame issue tracker. Not sure how long this work will take, so this won't be included in the upcoming 1.9.4 release I don't think.

The mysteriously weird 'moiré pattern'.

I was clicking about on stack overflow and came across this weird thing in a popular question there. This strange thing happens when drawing thick circles with pygame.draw.

Before: moiré pattern sort of looks nice

After: just a boring filled bit of red. Nothing interesting. Move along.
That was sort of a fun one to hack a fix for.

I love Pulseaudio. And sarcasm.

I have much respect for the pulseaudio developers... linux audio is hard work.

There's a couple of issues with sound and pygame on linux with the 'manylinux' binary wheels we have. It took me some time to track the causes down, and find ways to reproduce the issues.

TLDR; midi music is broken with pygame installed by wheels.

One has to do with arch linux using a different configuration directory for timidity than every other linux of the last 20 years. It's also to do with how we don't install special config files into random parts of the linux file system when people install the wheel with pip.

So, there are a few different software music synthesizers included with pygame. One is timidity (linked above) and the other is fluidsynth, the soundfont playing synth.

"Had enough with linux audio issues for today. I've got a sneaking feeling that pulseaudio and fluidsynth are to blame somehow for high cpu on linux systems when installing from pip wheels. https://github.com/pygame/pygame/issues/331#issuecomment-365177184"

So, I just sort of paused at that point. I guess we need to disable timidity (or patch it for arch linux), and handle the case when timidity isn't installed. Also, we need to handle soundfonts... either provide a message to the user if they are missing, or see if there are some FLOSS-happy ones we can distribute just on linux.

ps. During these investigations, I found out that fluidsynth has been saved from sourceforge and has had a whole bunch of releases. Awesome. http://www.fluidsynth.org/

Updating old links again...

Another pleasant delight whilst searching-around-the-interwebs... I found a patch logged against an old mirror of the bitbucket repo on github. This had a few really good documentation updates, and an addition of a 'pitch_bend' method... which allows you to do pitch bends, as the name suggests. Luckily @endolith (a hardcore Electronics and DSP engineer doing interesting stuff) made a new PR, and so those improvements will finally make it out into the world. Somehow I overlooked it on bitbucket too.

Additionally, a couple of new issues were filed against our bitbucket repo... which I haven't closed down yet (because reasons). They seem to be for a couple of SRC_ALPHA regressions. I'll add them into github soon, and then also to the list of blockers for a 1.9.4 release.

1.9.4 release planning.

"About time for another release I guess?"
About 5 days ago I asked the mailing list about doing a release, saying that I intended to make one. That gives people time to mention issues, do testing, and let me know if they're still working on things we should wait for.

That was nice, because people let me know about various issues they'd like fixed, and pointed me to the details (including the SRC_ALPHA issues above). I started looking around for things to merge in, and looking at other bug trackers (like the Debian one).
Then, I started a checklist of things to do for a pygame release, and also started filing issues with the correct labels and using a milestone.  https://github.com/pygame/pygame/issues/390

I want to get this work out so I can start merging in SDL2 stuff into the master branch. That's what I'm really looking forward to.


A new place to idle the day away: chat.

We have a pygame discord server for anyone who wants to chat about pygame projects.

/me sheds a tear for irc.

Feels like progress. 

Every time I look at that bug list it gets bigger! Schrödinger has a lot to answer for.
Stop messing about on your phone and get to work!
It was a fun 9 days in floss. Gonna do more!


Tuesday, February 06, 2018

Hey! It's work on pygame stuff day.

Yasssssssss! Today was work-on-pygame-stuff day.


ROTFL. ASCII renderer in action.

So here is a log of pygame (The game library for python) updates.
  • Removed a s.p.aaaam infestation from the pygame.org website. Planned next round of improvements for spam fighting.
  • Went over issue tracker adding labels, and replying to things. Thinking about, and closing a few issues. Felt a bit bad about closing some issues, mixed with a bit of relief. Shoulders felt a tiny bit lighter.
  • Set up a new linux box for doing pygame stuff with.
  • Mucked around with pypy nightly.
  • Messed with the python beta. Yay to breakpoint(). No real big breaking changes to the C API uncovered yet. Read the TLS API changes. Thought to myself that latency and low memory benchmarks would be good things to submit. time.time_ns is useful, but still a workaround for using floats for time, and there's already code out in the wild using smaller measurements of time than a nanosecond. At 1GHz a clock cycle is a nanosecond, and 5GHz CPUs have been around for more than a decade, so the API was already out of date when it was released. Still sort of awesome.
  • Applied for a 64bit Power PC developer virtual machine to help me fix several bugs reported by some fine people from Debian. Got given cloud access. Delicious Power box for compiling. I see bit fiddling 64bit big endian fixes in my future.
  • Did a code review of a chip tunes synth project. Got a little bit too excited. Caught up a bit with some internet friends/acquaintances who lurk in the odd parts of the net.
  • Added some files that github now requires. Before it had some red crosses, and now with these files there are green ticks. Feels like progress.
  • Answered a bunch of newbie questions.
  • Looked around at what is going on. Too much to look at really (oh, two more books... blog posts, ... arggggg tooo muuuuch). Joined up on gitlab, joined a few discord servers, got confused about discourse, discord, and discuss. 1789 unread emails, 64 twitter notifications. Slack, gitter, mastodon... noooooooo. Info overload. Lurked on an irc channel, and didn't even say hi, and no one even said hi back, and that is a perfect day on irc. The best part about irc is that people don't write to you.
  • Did some server admin work. "Let's see what I caught in my spam traps" *look of glee*, updating packages, grepping logs, using vi. Killed some daemons.
  • Found some eco renewable power servers. Considered starting on that migration... not today.
  • Looked at some assembly code for the first time in maybe 12 months. Downloaded some syntax highlighting updates for said file. Closed file.
  • Pulled out some test hardware and arranged it in my studio. Plugged some of it in. (a 24bit sound card, wacom tablet, vibration joypads, android and apple phones+tablets, a fist full of raspberrypi, the shittiest windows laptop, the oldest linux laptop, screens, screens, and more screens). Realised I needed more plants, because now it's a jungle of wires.
  • Decided I should keep a web log of this work.
  • Caught up on some changes to GLSL.

Oh gosh. There's lots of work to do... 
Time to pull up my big boy pants and work.

Did some code review, and merged a bunch of changes that some fine people submitted.
Made some plans for how to progress on the sdl2 branches. I see lots of merging in my future.
  • rebase sdl2 branch on top of master
  • bring all files back to the current state on master (that is, the SDL1 branch is buildable with no SDL2 files).
  • So now all files have the SDL2 stuff in history.
  • Introduce one file at a time with combined SDL1+SDL2 module.
  • At each step the SDL1 branch should be releasable.
  • Convert another module to SDL1+SDL2 in the SDL2 branch.
There's a bunch of testing in between those steps. Eventually we will get to a stage where the SDL2 branch will build as well on the build robots.

Realised my local build machines finished compiling and running tests before the remote CI machines had even started.

Decided it was time for a new 1.9.x release soon.


Today felt like spring. I sat on my balcony with the sun on my face drinking a coffee and eating an Italian strawberry. It was 0 degrees Celsius, but even that felt refreshing.

A good day in FLOSS.

Friday, July 07, 2017

SA battery thoughts

There's this worlds largest battery project going on in South Australia.


Tesla sold a financial product. The government needed to cover some of the risk of power lines going down, or of something going wrong in another part of the network. It needed to cover this risk quickly, for political reasons.
The chance of another similar storm knocking out the power lines, which now have bigger maintenance crews, is very small. But if there's another blackout and they didn't do anything? They'd be in big trouble with the newspapers.

So Tesla really sold a risk product. Since SA could have spent the money on more generation. But not all people understand that.

The 100MW stage of that wind farm cost $250 million, and took some years to reach agreement, and some years to build. By promising to build the battery quicker, they have covered that risk during the time they need it.

They could have installed another wind farm the same size in a different part of the state in order to reduce the risk, and fill up the valleys of power generation. That has been proven to work too, and the benefit is you have more power generation in the peaks.

The blackout was estimated to have cost businesses $367 million. 12% of the businesses had their own backup power generators, and about a third of the businesses had bought insurance for such situations. Life critical systems are required to have independent backup power.

By the time the battery is built there will probably be a similar amount of solar power installed as the battery (by current rates of installation). There's 2,034 MW of industrial solar being constructed in Australia for 2017. This doesn't include stuff going onto roofs of houses, of which there are millions of houses already covered and more being done. Solar installed in Australia can be done for $5,000AUD or less for a 5KW system on a house(1.25% of the median house price in SA, or 75% of the average monthly household income in SA for one month). That's $100 million AUD for 100MW on 20,000 homes installed at retail prices. Since the blackout happened, way more than 100MW of solar power has come online already.



The group that runs the grid predicts that by 2023 the entire state could be powered by rooftop solar.


There's also a lead smelter being upgraded, so it will have modern equipment which lets it use power more dynamically... effectively making it a battery. It can take in power, or not, as it needs. They can also shut down their power hungry desalination plant if needed (which they don't really need when there isn't a drought). Additionally, there's an extra interstate power connector running again, which was down for upgrades during the storm.

So now they have a backup battery in the works, a backup gas power plant in the works, backup power lines to another state, more efficient industrial power users, and hundreds of thousands of small independent solar power generators.

They've definitely covered their arses.

Friday, March 31, 2017

Data aware jit/blit - drawing 1.25 to 1.45 times faster.

Drawing different types of pixels can be quicker if you know about the image you are drawing, and if you know that drawing parts of the image with specialised blitters is quicker.

A good example is if 25% of your image is made up of large blocks of either white or black. Using a specialised fill routine to just draw those big blocks of color is lots quicker. This is because there is usually an optimised, hardware accelerated fill routine.

See all this whitespace? (all the white parts on your screen) These can be drawn really quickly with ASIC fill hardware rather than a slow GPU running a general purpose image blitter.

Another example is like this Alien image. The edges of the image are transparent, but the middle has no transparency. Since drawing transparent images is slow, using a different drawing routine for the middle part than the edges turns out to be faster.

Alien graphic used in Pygame Zero teaching framework documentation.
 
Here is a proof of concept which draws an image used by pygame zero in 80% of the time it normally takes. That is about 1.25 times quicker.
https://github.com/illume/dataaware

Alien sectioned up, drawn with 5 different blitters, each perfect for the section.

The results vary dramatically depending on the image itself, but 1.25 times faster is fairly representative of images with transparent edges and an opaque middle. If it finds sections where the image is a plain colour, it can be 1.42 times faster, or more. Larger images give you different results, as does different hardware. Obviously a platform with fast path hardware accelerated image fills, or fast 16 bit image rendering but slow 32 bit alpha transparency, is going to get much bigger speedups with this technique.
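As a toy sketch of the sectioning idea (this is not the dataaware repo code; it assumes a display is set up so convert() works, and that a classification pass already found a fully opaque middle rect):

import pygame

def blit_sectioned(dest, image, pos, middle):
    # `middle` is a pygame.Rect in image coordinates known to be fully opaque.
    x, y = pos
    w, h = image.get_size()
    # Fast path: the opaque middle needs no per-pixel alpha blending.
    # (A real implementation would pre-convert this once, not per blit.)
    dest.blit(image.subsurface(middle).convert(), (x + middle.x, y + middle.y))
    # Slow path: the transparent edge strips keep the alpha blitter.
    edges = [pygame.Rect(0, 0, w, middle.top),                        # top band
             pygame.Rect(0, middle.bottom, w, h - middle.bottom),     # bottom band
             pygame.Rect(0, middle.top, middle.left, middle.height),  # left strip
             pygame.Rect(middle.right, middle.top, w - middle.right, middle.height)]
    for edge in edges:
        if edge.width > 0 and edge.height > 0:
            dest.blit(image.subsurface(edge), (x + edge.x, y + edge.y))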

Further work is to develop a range of image classifiers for common situations like this, which return custom blitters depending on the image data, and the hardware which it is running on.

(this is one of several techniques I'm working on for drawing things more quickly on slow computers)

Thursday, March 30, 2017

Four new pygame things for slow computers.

There's four things I'd like to work on for faster pygame apps on slower computers (like the Raspberry Pi/Orange Pi).
  • Dirty rect optimizations, to speed up naive drawing.
  • SDL2, for better hardware support.
  • C/Cython versions of pygame.sprite
  • Surface.blits, a batching API for drawing many images at once.
The general idea with all of these is to take techniques which have proven to improve the performance of real apps on slower computers. For some of them there are still open questions, but I think they are worth pursuing. More advanced techniques can be used to work around these issues, but things should be fast even when people do them the easy way on slower computers.

Anyway... this is a summary of the discussions and research on what to do, and a declaration of intent.


Dirty Rect optimizations.

First a couple of definitions. "Dirty Rect" is a technique in graphics where you only update the parts that changed (the dirty parts), and rect is a rectangle encompassing the area that is drawn.

We already have plenty of code for doing this in pretty fantastic ways. LayeredDirty is a particularly good example, which includes overlapping layers and automatically deciding on a rendering technique based on what sort of things you're drawing. However, when people just update the whole screen, like in pygame zero (the project with a mission to make things simple for newbies), then there can be performance issues. This change is aimed at making those apps work more quickly without them needing to do anything extra themselves.
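For reference, LayeredDirty already handles the whole dance when you do use it; a minimal usage sketch:

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
background = pygame.Surface(screen.get_size())

group = pygame.sprite.LayeredDirty()   # add your DirtySprite instances to this
group.clear(screen, background)        # what to restore behind moved sprites

# Each frame:
group.update()
dirty = group.draw(screen)             # returns only the rects that changed
pygame.display.update(dirty)           # update just those areas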

So, on to the technique...

Because rectangles can overlap, it's possible to reduce the amount of drawing done when using them for dirty rect updating. If two rectangles overlap, then we don't need to draw the overlapping area twice. Normally the dirty rect update algorithm just makes a bigger rectangle which contains the two smaller rectangles. But that's not optimal.

But, as with all overdraw reduction algorithms, can it be done fast enough to be worth it?

Here's an article on the topic:
    http://gandraxa.com/detect_overlapping_subrectangles.xml

jmm0 made some code here:
    https://bitbucket.org/jmm0/optimize_dirty_rects/src/c2affd5450b57fc142c919de10e530e367306222/optimize_dirty_rects.py?at=default&fileviewer=file-view-default

DR0ID, also did some with tests and faster code...
https://bitbucket.org/dr0id/pyweek-games/src/1a9943ebadc6e2102db0457d17ca3e6025f6ca60/pyweek19-2014-10/practice/dirtyrects/?at=default


So far DR0ID says it's not really fast enough to be worth it in python. However, there are opportunities to improve it. Perhaps a more optimal algorithm, or one which uses C or Cython.

"worst case szenario with 2000 rects it takes ~0.31299996376 seconds"
If it's 20x faster in C, then that gets down to about 0.016 seconds. Still too slow perhaps, but maybe not.

For our use case, there might only be 2-10 things drawn. Which seems way under the worst case scenario for performance. Additionally we can use some heuristics to turn it off when it is not worth doing. Like when the whole screen is going to be updated anyway, we don't do it.

Below is one of the cases that benefits the most.
Currently the update method combines two rects like this:
_______
|     |
|   __|____
|___|_|   |
    |     |
    |     |
    |_____|

Into a big rect like this:
___________
|     ####|
|     ####|
|   XX    | # - updated, but no need.
|###      | X - over drawn.
|###      |
|###______|

Note in these crude diagrams the # area is drawn even though it doesn't need to be, and the XX is over draw. When there are some overlapping rects like this that are large, that can be quite a saving of pixels not needed to be drawn. But what we want is three rects, which do not give us overdraw and do not needlessly update pixels which we do not need to.

_______
|     |
|_____|____
|___|     |
    |     |
    |     |
    |_____|

This can easily save having to draw millions of pixels. But can it be done fast enough, with the right heuristics to be useful?
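A minimal sketch of the splitting idea for just the two-rect case (the algorithms linked above generalise this to many rects, which is where the speed question comes in):

import pygame

def subtract_rect(b, a):
    # Return the parts of rect b not covered by rect a (up to 4 pieces).
    if not b.colliderect(a):
        return [b]
    pieces = []
    if b.top < a.top:        # strip above the overlap
        pieces.append(pygame.Rect(b.left, b.top, b.width, a.top - b.top))
    if b.bottom > a.bottom:  # strip below the overlap
        pieces.append(pygame.Rect(b.left, a.bottom, b.width, b.bottom - a.bottom))
    top = max(b.top, a.top)
    bottom = min(b.bottom, a.bottom)
    if b.left < a.left:      # left strip of the middle band
        pieces.append(pygame.Rect(b.left, top, a.left - b.left, bottom - top))
    if b.right > a.right:    # right strip of the middle band
        pieces.append(pygame.Rect(a.right, top, b.right - a.right, bottom - top))
    return pieces

def non_overlapping(a, b):
    # Cover the union of a and b with no overdraw: a, plus (b minus a).
    return [a] + subtract_rect(b, a)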

For a lot of people, and apps, this won't be useful. But I hope it will for those trying to draw things for the first time on slow computers.


There's some more discussion on the pygame mailing list about it, drawbacks, and links to old school Apple region rendering techniques.


SDL 2 hardware support

Version 2 of SDL contains hardware support which makes it faster on some platforms (for some types of apps). This includes the Raspberry PI.


There are actually a few different modules that have already implemented a pygame API subset using SDL2, which shows me compatibility is important to some people.

The approach I want to try going forward is to use a single source SDL1 and SDL2 code base, with a compile time option (like we have single source py2 and py3). There are new SDL2 APIs for hardware acceleration, which can be added later in a way which fits in nicely with our existing API.

Lenard Lindstrom has made patches for pygame using SDL2 available here for some years: https://bitbucket.org/llindstrom/pygame-1.10-patch

The first step is to do an ifdef build for linux, and then do some more testing to confirm areas of compatibility and that the approach is ok.

Additionally we agreed that using Cython is a good idea.


There's quite a long discussion on the pygame mailing list, with more to it than this. But this is the general idea.


C/Cython versions of pygame.sprite

Sprites are high level objects which represent game objects. Graphics with logic.

I've already mentioned things like LayeredDirty, which does all sorts of background optimizations and scene management for you.

A lot of this code is in python, and could do with some speed ups. Things like collision detection, and drawing lots of objects are not always bottlenecked by blitting. We've known this ever since the psyco python jit came out. There are other techniques to work around these things, like using something like pymunk for physics, or using a tile based renderer, or using a fast particle renderer. However people still use sprites, for ease of use mainly.

So the plan is to compile them with Pyrex...  err... I mean Cython, and see what sort of improvements we can get that way. I expect naive collision detection, and drawing, will get speed ups.



Surface.blits, a batching API for drawing.

Another area that has been proven to speed up games is by creating a batching API for drawing images. Rather than draw each one at a time, draw them all at once. This way you can avoid the overhead of repeatedly locking the display surface, and you can avoid lots of expensive python function calls and memory allocations.

The proposed names over the years have included blit_list, blits, blit_mult, blit_many...

I feel we should call it "Surface.blits". To go with drawline/drawlines.
It would take a sequence of tuples which match up with the blit arguments.

The original Surface.blit API.
    http://www.pygame.org/docs/ref/surface.html#pygame.Surface.blit

  blit(source, dest, area=None, special_flags = 0) -> Rect

The new blits API.
    blits(args) -> [rects]
    args = [(source: Surface, dest: Rect, area: Rect = None, special_flags: int = 0), ...]
    Draws a sequence of Surfaces onto this Surface...

    :Example:
    >>> surf.blits([(source, dest),
    ...             (source, dest),
    ...             (source, dest, area),
    ...             (source, dest, area, BLEND_ADD)])
    [Rect(), Rect(), Rect(), Rect()]
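In pure python the semantics would be roughly this little sketch (the point of doing it in C is to avoid exactly this per-item python call overhead):

def blits(dest, blit_args):
    # Each item is (source, dest), optionally followed by area and special_flags.
    return [dest.blit(*args) for args in blit_args]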


One potential option...
  • Have a return_rects=False argument, where if you pass it, it can return None instead of a list of rects. This way you avoid allocating a list, and all the rects inside it. I'll benchmark this to see if it's worth it -- but I have a feeling all those allocations will be significant. Some people don't track updates, so allocating the rects isn't worth it for them. eg. the implementation from Leif doesn't return rects.

It can handle these use cases:
  • blit many different surfaces to one surface (like the screen)
  • blit one surface many times to one surface.
  • when you don't care about rects, it doesn't allocate them.
  • when you do care about update tracking, it can track them.
It can *not* handle (but would anyone care?):
  • blit many surfaces, to many other surfaces.
Areas not included in the scope of this:
  • This could be used by two sprite groups quite easily (Group, RenderUpdates). But I think it's worth trying to compile the sprite groups with Cython instead, as a separate piece of work.
  • Multi processing. It should be possible to use this API to build a multi process blitter. However, this is not addressed in this work. The Surface we are blitting onto could be split up into numberOfCores tiles, and rendered that way. This is classic tile rendering, and nothing in this API stops an implementation of this later.

There's an example implementation by Leif Theden here:
https://gist.github.com/bitcraft/1785face7c5684916cde


There is always a trade off between making the API fast, and simple enough to be used in lots of different use cases. I feel this API does that. But more benchmarking and research needs to be done first.

There's some discussion of the blits proposal on the pygame mailing list.


Other performance related things.

There was also some discussion of using the quad core CPUs on the newer Raspberry Pis, which are faster than the video hardware there for some tasks. Unfortunately that won't help the millions of older ones, and even some newer ones like the new zero model. I've seen those CPUs outperform the hardware jpeg decoder, and also win at image classification. However, like taking advantage of METH_FASTCALL in python 3.6, that will have to wait for another time. Separately, there's also work going on by other people on other optimization topics. An interesting one is the libjit based graphics blit accelerator.

Thursday, March 23, 2017

pip is broken

Help?

Since asking people to use pip to install things, I get a lot of feedback on pip not working. Feedback like this.

"Our fun packaging Jargon"


What is a pip? What's it for? It's not built into python?  It's the almost-default and almost-standard tool for installing python code. Pip almost works a lot of the time. You install things from pypi. I should download pypy? No, pee why, pee eye. The cheeseshop. You're weird. Just call it pee why pee eye. But why is it called pip? I don't know.

"Feedback like this."

pip is broken on raspbian

pip3 doesn't exist on windows

People have an old pip. Old pip doesn't support wheels. What are wheels? It's a cute bit of jargon to mean a zip file with python code in it structured in a nice way. I heard about eggs... tell me about eggs? Well, eggs are another zip file with python code in it. Used mainly by easy_install. Easy install? Let's use that, this is all too much.

The pip executable or script is for python 2, and they are using python 3.

pip is for a system python, and they have another python installed. How did they install that python? Which of the several pythons did they install? Maybe if they install another python it will work this time.

It didn't work one time, so they think that sudo will fix things. And now certain files can't be updated without sudo. However, by now they have forgotten that sudo exists.

"pip lets you run it with sudo, without warning."


pip doesn't tell them which python it is installing for. But I installed it! Yes you did. But which version of python, and into which virtualenv? Let's use these cryptic commands to try and find out...

pip doesn't install things atomically, so if there is a failed install, things break. If pip was a database (it is)...

Virtual environments work if you use python -m venv, but not virtualenv. Or sometimes it's the other way around. And only if you have the right packages installed on Debian and Ubuntu... because they don't install virtualenv by default.

What do you mean I can't rename my virtualenv folder? I can't move it to another place on my Desktop?

pip installs things into global places by default.

"Globals by default."


Why are packages still installed globally by default?

"So what works currently most of the time?"


python3 -m venv anenv
. ./anenv/bin/activate
pip install pip --upgrade
pip install pygame


This is not ideal. It doesn't work on windows. It doesn't work on Ubuntu. It makes some text editors crash (because virtualenvs have so many files that the editors get sick). It confuses test discovery (because for some reason test tools still don't know about virtual environments, and try to test random packages you have installed). You have to know about virtualenv, about pip, about running things with modules, about environment variables, and system paths. You have to know all of that at the beginning. Before you know anything at all.

Is there even one set of instructions where people can have a new environment, and install something? Install something in a way that it might not break their other applications? In a way which won't cause them harm? Please let me know the magic words?

I just tell people `pip install pygame`. Even though I know it doesn't work. And can't work. By design. I tell them to do that, because it's probably the best we got. And pip keeps getting better. And one day it will be even better.

Help? Let's fix this.

Tuesday, March 14, 2017

Comments on pygame.org community

Some notes about the current state of comments, and thoughts about future plans are below.

0) Spam.

So far there hasn't been comment spam on the new comment system (yet!)... but eventually some will get through the different layers of defense. Those layers are: a web app firewall (through cloudflare), which helps block bots and abusive proxy servers; required user signups; limits on the number of accounts and comments that can be created per hour; making the spam useless for SEO (nofollow on links); and then a spam classifier.

The spam classifier is pretty basic. It only uses the message text so far, and not other features like 'how old is the poster account', or 'does the user account have other accounts linked'. Despite that, and only having been trained on a few thousand messages, classification seems to work pretty well. It scores 0.97 for ham, and 0.76 for spam, when cross validated on the test set.
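For the curious, a message-text-only classifier of this flavour can be tiny. A hedged sketch using scikit-learn (an assumed library choice for illustration; not the actual pygameweb code or training data):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: message text only, no account-age style features yet.
texts = ["nice game, the balloon physics feel great",
         "BUY CHEAP PILLS http://spam.example.com",
         "how did you do the parallax scrolling?",
         "make $$$ fast, click here"]
labels = [0, 1, 0, 1]  # 0 = ham, 1 = spam

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free pills, click here"]))  # -> [1]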

It's sort of weird having source code available for a website whilst also trying to ward off spammers. If they looked, they could see exactly the measures taken to prevent spam. People who are dedicated will be able to spam easily, but casual and automated spam should be stoppable.

We used to have a 'grey listing' style account signup, where people could only sign up with a secret link. Whilst this worked ok, it also made signing up quite challenging. You needed to reach out to the community, or know someone who was already in it. This really did reduce the amount of spam though.

Disqus (a service) commenting was removed, and the comments imported from there. This was because they added advertising without getting consent (which I received a lot of complaints about). Additionally, we didn't have much control over managing the comments in a way which suited our community (more on this below).
Gravatars are being used for avatars. There's no profile image for the website itself.

What's left to do with comments...

1) Doc comments

The old "doccomments" need to be moved into the new comment system. This is because the documentation lives in static files and is not produced by our website. Additionally, you can have multiple comments on a single page. Then they need quite some moderation for spam and abuse.

2) Better moderator tools

Adding spam/unspam links for moderators to quickly classify something. Also a list of recent comments that need moderation. The current system for this is really quite clunky.
The aim is to really reduce the work needed to be done by moderators.
Moderating internet comments is soul destroying work... so let's make robots do it.

 

3) Optionally disabling comments on projects

After some discussions I've decided to add an option for projects to disable comments. This way people don't have to deal with unwanted silly criticism if they don't want to. So, if someone is ready to get feedback for a project they can turn on comments. If they just want to show people what they've done (and perhaps get feedback from their own circle of friends) then they can leave the comments off. There's been a number of people who got some really weird demands, abuse, and other unsavoury comment behaviour... and just quit their projects. eg. this is one project which quit https://www.pygame.org/project/182 Another person removed more than 20 projects after getting some nasty comments from anonymous strangers on the internet.

4) Comments only from other makers

Additionally, I think it might be a good idea to only allow people who have posted a project to comment on other people's projects. This will stop the drive-by trolls, and make comments more of a discussion amongst peers. Gathering useful feedback, and having constructive criticism, is a great thing. I guess this will be optional as well, since whilst feedback from peers is often higher quality, hearing from others is also very useful.
[comments allowed options]
   - no comments on this project
   - comments from other project owners only
   - comments from everyone with an account.

This will also be a nice signal that the pygame community is about making things, and that we place importance on making things.
"If you want to comment on this project, you first have to share a project of your own".

5) Reactions, ratings, awards, and stars/favourites

The ludumdare and pyweek systems have multiple ratings for different aspects of a project. They ask for feedback on particular things: sound, fun, innovation, production... etc. So I'd like to store those for comments.

Again, this will be optional for projects. Each will have a [Seek feedback.] option.
Feedback like this will make giving more useful comments easier.

Additionally, a 'didn't work for me' option people can click would let them provide that feedback easily without polluting the comments too much.
[didn't work for me] [on which OS][stack trace]
Whilst pointing out defects is useful, it can also tend towards annoying nitpicking and turn into unwanted bikeshed arguments. It also can get in the way of more long form thoughtful discussion.
Awards are fun too (as used in ludumdare/pyweek), like "best duck main character".

Favourites, and stars are useful for keeping a list of ones you personally are interested in. They're useful for following projects. Also for "which projects do other people like".

6) Social auth logins

I've also added fields to projects and user profiles for linking up your twitter username, your bitbucket, and github urls. It's useful to know github/bitbucket links for projects. This allows downloading change information from there, and even releases. Much like how the pygame community dashboard brings information in from dozens of different social platforms, I want to allow projects to have that too.

More importantly, people who want to form teams or work on their projects with others will be able to ask for contributors (or even know where to find the project!)

Allowing people to just enter their github/bitbucket/twitter/etc user names means they don't need to link their accounts for signup. However, letting people use these will allow people to join more quickly. For those truly too lazy to enter in an email address ;)

7) Putting the Python Code of Conduct in front

Putting the Python Code of Conduct in front is another conscious decision. In short, it says to respect each other, and don't be mean. It says the whole python community, along with the pygame community, expects to be able to participate in a friendly, constructive manner. So it's right there on the front page.
"Leave a thoughtful comment"
The messaging, and branding also tries to suggest people to be thoughtful. Rather than have a "submit comment" button we have a "leave thoughtful comment" button. It's a little thing, but hopefully it signals to people that they should play nice.

Multi coloured branding

8) How to write good criticism?

I'd like to be able to point people to articles on how to do good criticism of both software, and of arts projects. What makes good feedback? What makes a good review?

Is the purpose of a review to nitpick? Is it to help energize people, to recognize people for their work?

Articles like On Giving Feedback are the sort I want to link to.
"When it was my work being critiqued, it made me excited to push my design and thinking forward."
I'd like to point out high quality reviews as good examples. Writing reviews is an art form in itself. My time writing arts reviews really helped me when working in creative fields, as much as receiving reviews did. It really is a different thing to review a creative piece, compared to reviewing a purely functional piece.

Do you know any good articles on feedback and review we should share?

Monday, March 06, 2017

Pixel perfect collision detection in pygame with masks.

"BULLSHIT! That bullet didn't even hit me!" they cried as the space ship starts to play the destruction animation, and Player 1 life counter drops by one. Similar cries of BULLSHIT! are heard all over the world as thousands of people lose an imaginary life to imperfect collision detection every day.

Do you want random people on the internet to cry bullshit at your game? Well do ya punk?

Bounding boxes are used by many games to detect if two things collide. Either a rectangle, a circle, a box or a sphere are used as a crude way to check if two things collide. However for many games that just isn't enough. Players can see that something didn't collide, so they are going to be crying foul if you just use bounding boxes.

Pygame added fast and easy pixel perfect collision detection. So no more bullshit collisions ok?

Code to go along with this article can be found here ( https://github.com/illume/pixel_perfect_collision ).

Why rectangles aren't good enough.

Here are some screen shots of a little balloon game I made, modeled after an old commodore 64 game I typed in when I was eight.  Here you can see a balloon, and a cave.  The idea is you have to move the balloon through the cave without hitting the walls.  Now if you used just bounding rectangle collisions, you can see how it would not work, and how the game would be no fun - because the rectangle (drawn in green around the balloon) would hit the sides when the balloon didn't really hit the sides.


You can download the balloon mini game code to have a look at with this article.

How is pixel perfect collision detection done? Masks.

Instead of using 8-32 bits per pixel, pygame's masks use only 1 bit per pixel. This makes it very quick to check for collisions, as you can compare 32 pixels with one integer compare. Masks use bounding box collision first, to speed things up.
Even though bounding boxes are a crude approximation for collisions, they are faster than using bitmasks. So pygame first checks whether the rectangles collide - and only if they do, does it then check whether the pixels collide.

How to use pixel perfect collision detection in pygame?

There are a couple of ways you can use pixel perfect collision detection with pygame.
  • Creating masks from surfaces.
  • Using the pygame.sprite classes.
You can create a mask from any surface with transparency.  So you load up your images normally, and then create masks for them.
Or you can use the pygame.sprite classes, which handle some of the complexity for you.

Mask.from_surface with Alpha transparency.

By default pygame uses either color keys, or per pixel alpha values, to decide which parts of an image are converted into the mask.
Color keyed images have pixels that are either 100% transparent or fully visible, whereas per pixel alpha images have 255 levels of transparency. By default pygame treats pixels that are more than 50% opaque as set, that is, as ones that will collide.
It's a good idea to pre-calculate the mask, so you do not need to generate it every frame.
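Creating the mask up front looks like this (the threshold argument is there if the default 50% cut-off isn't what you want):

import pygame

# (after pygame.display.set_mode, so convert_alpha() works)
image = pygame.image.load("balloon.png").convert_alpha()
mask = pygame.mask.from_surface(image)         # pixels over 50% opaque are set
strict = pygame.mask.from_surface(image, 200)  # or pick your own alpha threshold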

Checking if one mask overlaps another mask.

It is fairly simple to see if one mask overlaps another mask.
Say we have two masks (a and b), and also a rect for where each of the masks is.

# We calculate the offset of the second mask relative to the first mask.
offset_x = a_rect[0] - b_rect[0]
offset_y = a_rect[1] - b_rect[1]
# See if the two masks at the offset are overlapping.
overlap = a.overlap(b, (offset_x, offset_y))
if overlap:
    print("the two masks overlap!")

Pixel perfect collision detection with pygame.sprite classes.

The pygame.sprite classes are a high level way to display your images.  They provide things like collision detection, layers, groups and lots of other goodies.

Note: balloon2.py that comes with this article uses sprites with masks.

If you give your sprites a .mask attribute then they can use the built in collision detection functions that come with pygame.sprite.
class Balloon(pygame.sprite.Sprite):
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)  # call Sprite initializer
        self.image = pygame.image.load("balloon.png")
        self.rect = self.image.get_rect()
        self.mask = pygame.mask.from_surface(self.image)


b1 = Balloon()
b2 = Balloon()
# spritecollide expects a Group to test against, so wrap b2 in one.
balloons = pygame.sprite.Group(b2)

if pygame.sprite.spritecollide(b1, balloons, False, pygame.sprite.collide_mask):
    print("sprites have collided!")

Collision response - approximate collision normal.

Once two things collide, what happens next?  Maybe one of these things...
  • One of the things blows up, disappears, or does a dying animation.
  • Both things disappear.
  • Both things bounce off each other.
  • One thing bounces, the other thing stays.
If something is going to be bouncing, and not just disappearing, then we need to figure out the direction the two masks collided.  This direction of collision we will call a collision normal.
Using just the masks, we can not find the exact collision normal, so we find an approximation.  Often in games we don't need to find an exact solution, just something that looks kind of right.
Using an offset in the x direction, and the y direction, we find the difference in overlapped areas between the two masks.  This gives us the vector (dx, dy), which we use as the collision normal.
If you understand vector maths, you can add this normal to the velocity of the first moving object, and subtract it from the other moving object.

def collision_normal(left_mask, right_mask, left_pos, right_pos):

    def vadd(x, y):
        return [x[0] + y[0], x[1] + y[1]]

    def vsub(x, y):
        return [x[0] - y[0], x[1] - y[1]]

    def vdot(x, y):
        return x[0] * y[0] + x[1] * y[1]

    offset = list(map(int, vsub(left_pos, right_pos)))

    overlap = left_mask.overlap_area(right_mask, offset)

    if overlap == 0:
        return None

    # Calculate the collision normal by seeing how the overlap area
    # changes as we nudge the offset in each direction.
    nx = (left_mask.overlap_area(right_mask, (offset[0] + 1, offset[1])) -
          left_mask.overlap_area(right_mask, (offset[0] - 1, offset[1])))
    ny = (left_mask.overlap_area(right_mask, (offset[0], offset[1] + 1)) -
          left_mask.overlap_area(right_mask, (offset[0], offset[1] - 1)))
    if nx == 0 and ny == 0:
        # One sprite is inside the other.
        return None

    return [nx, ny]

Fun uses for masks.

Here's a few fun ideas that you could implement with masks, and pixel perfect collision detection.
  • A balloon game, where the bit masks are created from nicely drawn levels - which are then turned into bitmasks for pixel perfect collision detection.  No need to worry about slicing the level up, or manually specifying the collision rectangles, just draw the level and create a mask out of it. Here's a screen shot from the balloon code that comes with this article:

  • A platform game where the ground is not made out of platforms, so much as pixels. So you could have curvy ground, or single pixel things the characters could stand on.
  • Mouse cursor hit detection. Turn your mouse cursor into something, and rather than have a single pixel hit, instead have the hit be any pixel under the mouse cursor.
  • "Worms" style exploding terrain.

Friday, February 24, 2017

setup.cfg - a solution to python config file soup? A howto guide.

Sick of config file soup cluttering up your repo? Me too. However there is a way to at least clean it up for many python tools.


Some of the tools you might use and the config files they support...
  • flake8 - .flake8, setup.cfg, tox.ini, and config/flake8 on Windows
  • pytest - pytest.ini, tox.ini, setup.cfg
  • coverage.py - .coveragerc, setup.cfg, tox.ini
  • mypy - setup.cfg, mypy.ini
  • tox - tox.ini
"Can mypy use setup.cfg as well?"
"OK, you've convinced me." -- Guido

With that mypy now also supports setup.cfg, and we can all remove many more config files.

The rules for precedence are easy:
  1. read --config-file option - if it's incorrect, exit
  2. read [tool].ini - if correct, stop
  3. read setup.cfg

 

How to config with setup.cfg?

Here's a link to the configuration documentation for setup.cfg.

What does a setup.cfg look like now?

Here's an example setup.cfg for you, with various tools configured. (Note these are nonsensical example configs, not what I suggest you use!)

## http://coverage.readthedocs.io/en/latest/config.html
#[coverage:run]
#timid = True

## http://pytest.org/latest/customize.html#adding-default-options
# [tool:pytest]
# addopts=-v --cov pygameweb pygameweb/ tests/

## http://mypy.readthedocs.io/en/latest/config_file.html
#[mypy]
#python_version = 2.7

#[flake8]
#max-line-length = 120
#max-complexity = 10
#exclude = build,dist,docs/conf.py,somepackage/migrations,*.egg-info

## Run with: pylint --rcfile=setup.cfg somepackage
#[pylint]
#disable = C0103,C0111
#ignore = migrations
#ignore-docstrings = yes
#output-format = colorized



Monday, February 20, 2017

Is Type Tracing for Python useful? Some experiments.

Type Tracing - as a program runs, you trace it and record the types of values coming in and out of functions, and the types assigned to variables.
Is Type Tracing useful for providing quality benefits, documentation benefits, porting benefits, and also speed benefits to real python programs?

Python is now a gradually typed language, meaning that you can gradually apply types and, along with type inference, statically check your code is correct. Once you have added types to everything, you can catch quite a lot of errors. For several years I've been using the new type checking tools that have been popping up in the python ecosystem. I've given talks to user groups about them, and also trained people to use them. I think a lot of people are using these tools without even realizing it. They see warnings about type issues in their IDE, and methods are automatically completed for them.

But I've always had some thoughts in the back of my head about recording types at runtime of a program in order to help the type inference out (and to avoid having to annotate them manually yourself).

Note that this technique is different from, but related to, what is done in a tracing jit compiler.
Some days ago I decided to try Type Tracing out... and I was quite surprised by the results.

I asked myself these questions.

  • Can I store the types coming in and out of python functions, and the types assigned to variables in order to be useful for other things based on tracing the running of a program? (Yes)
  • Can I "Type Trace" a complex program? (Yes, a flask+sqlalchemy app test suite runs)
  • Is porting python 2 code quicker by Type Tracing combined with static type checking, documentation generation, and test generation? (Yes, refactoring is safer with a type checker and no manually written tests)
  • Can I generate better documentation automatically with Type Tracing? (Yes, return and parameter types and example values helps understanding greatly)
  • Can I use the types for automatic property testing? (Yes, hypothesis does useful testing just knowing some types and a few examples... which we recorded with the tracer)
  • Can I use example capture for tests and docs, as well as the types? (Yes)
  • Can I generate faster compiled code automatically, just using the recorded types and Cython? (Yes)

Benefits from Type Tracing.

Below I try to show that the following benefits can be obtained by combining Type Tracing with other existing python tools.
  • Automate documentation generation, by providing types to the documentation tool, and by collecting some example inputs and outputs.
  • Automate some type annotation.
  • Automatically find bugs static type checking can not. Without full type inference, existing python static type checkers can not find many issues until the types are fully annotated. Type Tracing can provide those types.
  • Speed up Python2 porting process, by finding issues other tools can't. It can also speed things up by showing people types and example inputs. This can greatly help people understand large programs when documentation is limited.
  • Use for Ahead Of Time (AOT) compilation with Cython.
  • Help property testing tools to find simple bugs without manually setting properties.

Tools used to hack something together.

  • coverage (extended the coverage checker to record types as it goes) 
  • mypy (static type checker for python)
  • Hypothesis (property testing... automated test generator)
  • Cython (a compiler for python code, and code with type annotations)
  • jedi (another python static type checker)
  • Sphinx (automatic documentation generator).
  • CPython (the original C implementation of python)

More details on the experiments below.

Type Tracing using 'coverage'.

Originally I hacked up a sys.settrace script... and started going. But there really are so many corner cases. Also, I already run the "coverage" tool over the code base I'm working on.

I started with coverage.pytracer.PyTracer, since it's python. Coverage also comes with a faster tracer written in C. So far I'm just using the python one.

The plan later would be to perhaps use CoverageData, which uses JSON. That means storing the types will be hard sometimes (eg, when they are dynamically generated). However, I think I'm happy to start with easy types. To start simple, I'll just record object types as strings with something like `repr(type(o)) if type(o) is not type else repr(o)`. Well, I'm not sure. So far, I'm happy with hacking everything into my fork of coverage, but to move it into production there is more work to be done. Things like multiprocessing and multithreading all need to be handled.
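
To give a feel for the idea, here's a tiny sys.settrace based sketch (illustrative only - the real thing hooks into coverage's tracer, and handles many more cases):

import sys
from collections import defaultdict

seen = defaultdict(set)  # (function name, variable name) -> type names

def trace_types(frame, event, arg):
    func = frame.f_code.co_name
    if event == 'call':
        # At call time f_locals holds just the arguments coming in.
        for name, value in frame.f_locals.items():
            seen[(func, name)].add(type(value).__name__)
    elif event == 'return':
        # arg is the value being returned.
        seen[(func, 'return')].add(type(arg).__name__)
    return trace_types

def divide(a, b):
    return a / b

sys.settrace(trace_types)
divide(1, 2)
divide(1.5, 2)
sys.settrace(None)

for key, types in sorted(seen.items()):
    print(key, sorted(types))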

Porting python 2 code with type tracing.

I first started porting code to python 3 in the betas... around 2007. Including some C API modules. I think I worked on one of the first single code base packages. Since then the tooling has gotten a lot better. Compatibility libraries exist (six), lots of people have figured out the dangerous points and documented them. Forward compatibility features were added in the python 2.6, 2.7, and 3.5 releases to make porting easier. However, it can still be hard.

Especially when Python 2 code bases often don't have many tests. Often zero tests. Also, there may be very little documentation, and the original developers have moved on.

But the code works, and it's been in production for a long time, and gets updates occasionally. Maybe it's not updated as often as it's needed because people are afraid of breaking things.

Steps to port to python 3 are usually these:

  1. Understand the code.
  2. Run the code in production (or on a copy of production data).
  3. With a debugger, look at what is coming in and out of functions.
  4. Write tests for everything.
  5. Write documentation.
  6. Run 2to3.
  7. Do lots of manual QA.
  8. Start refactoring.
  9. Repeat. Keep manually writing tests and docs, and keep testing manually. Many times.
Remember that writing tests is usually harder than writing the code in the first place.

With type tracing helping to generate docs, types for the type checker, and examples both for human reading and for the hypothesis property checker, we get a lot more tools to help ensure quality.

A new way to port python2 code could be something like...
  1. Run program under Type Tracing, line/branch coverage, and example capture.
  2. Look at generated types, example inputs and outputs.
  3. Look at generated documentation.
  4. Gradually add type checking info with help of Type Tracing recorded types.
  5. Generate tests automatically with Type Tracing types, examples, and hypothesis automated property testing. Generate empty test stubs for things you still need to test.
  6. Once each module is fully typed, you can statically type check it.
  7. You can cross validate your type checked python code against your original code, under the Type Tracer.
  8. Refactoring is easier with better docs, static type checks, tests, types for arguments and return values, and example inputs and outputs.
  9. Everything should be ported to work with the new forwards compatibility functionality in python2.7.
  10. Now with your various quality checks in place, you can start porting to python3. Note, you might not have needed to change any of the original code - only add types.
I would suggest the effort is about 1/5th of the normal time it takes to port things, especially if you want to make sure the chance of introducing errors is very low.
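
To make steps 2 to 4 concrete, the kind of output I mean looks something like this (parse_price is a made up function here; the type comment and the example come from traced runs):

def parse_price(raw):
    # type: (str) -> float
    """
    :Example:
    >>> parse_price('1.50')
    1.5
    """
    return float(raw)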

Below are a couple of issues where Type Tracing can help over existing tools.

Integer divide issue.

Here I will show a bug that the 2to3 conversion tool does not fix. mypy also does not detect a problem with this code.

# int_issue.py
def int_problem(x):
    return x / 4
print(int_problem(3))

$ python2 int_issue.py
0 
$ python3 int_issue.py
0.75

$ mypy --py2 int_issue.py
$ mypy int_issue.py
$ 2to3 int_issue.py
RefactoringTool: Skipping optional fixer: buffer
RefactoringTool: Skipping optional fixer: idioms
RefactoringTool: Skipping optional fixer: set_literal
RefactoringTool: Skipping optional fixer: ws_comma
RefactoringTool: Refactored int_issue.py
--- int_issue.py    (original)
+++ int_issue.py    (refactored)
@@ -3,4 +3,4 @@
 def int_problem(x):
     return x / 4

-print(int_problem(3))
+print((int_problem(3)))
RefactoringTool: Files that need to be modified:
RefactoringTool: int_issue.py

See how when run under python3 it gives a different result?

Can we fix it when Type Tracing adds types?  (Yes)

So, how about if we run the program under type tracing, and record the input types coming in and out? See how it adds a python3 compatible comment about taking an int, and returning an int. This is so that mypy (and other type checkers) can see what it is supposed to take in.

def int_problem(x):
    # type: (int) -> int
    return x / 4
print(int_problem(3))

$ mypy int_issue.py
int_issue.py:5: error: Incompatible return value type (got "float", expected "int")
I'm happy that yes, Type Tracing combined with mypy can detect this issue, whereas mypy can not by itself.


Binary or Text file issue?

Another porting issue not caught by existing tools is doing the right thing depending on whether a python file is opened in binary mode or in text mode. If the file is opened in binary mode, read() will return bytes; otherwise it might return text.

In theory this could be made to work; however, at the time of writing there is an open issue with "dependent types" or "Factory Pattern" functions in mypy. For more information on this, and also a work around I wrote, see this issue: https://github.com/python/mypy/issues/2337#issuecomment-280850128

In there I show that you can create your own io.open replacement that always returns one type. eg, open_rw(fname) instead of open(fname, 'rw').
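
A minimal sketch of that idea (open_rb and open_rt are hypothetical names; the actual work around is in the linked issue):

import io

def open_rb(fname):
    # type: (str) -> io.BufferedReader
    """Only ever opens in binary read mode, so .read() always gives bytes."""
    return io.open(fname, 'rb')

def open_rt(fname):
    # type: (str) -> io.TextIOWrapper
    """Only ever opens in text read mode, so .read() always gives text."""
    return io.open(fname, 'r', encoding='utf8')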

Once you know that .read() will return bytes, then you also know that you can't call .format() on the result in python 3. The solution is to use % string formatting on bytes, which is supported from python 3.5 upwards.

x = f.read() # type: bytes
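
For example, using the hypothetical open_rb helper from above (a sketch):

with open_rb('data.bin') as f:
    x = f.read()  # type: bytes
    # % formatting works on bytes from python 3.5 upwards (and on python 2).
    print(b'first four bytes: %s' % (x[:4],))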

So the answer here is that mypy could likely solve this issue by itself in the future (once things are fully type annotated). But for now, it's good to see that combining type tracing with mypy could help detect binary and text encoding issues much faster.

Generating Cython code with recorded types.

I wanted to see if this was possible. So I took the simple example from the cython documentation.

I used my type tracer to transform this python:
def f(x):
    return x**2-x

def integrate_f(a, b, N):
    s = 0
    dx = (b-a)/N
    for i in range(N):
        s += f(a+i*dx)
    return s * dx

Before you look below... take a guess at what the parameters a, b, and N are. Note how there are no comments. Note how the variable names are single letters. Note how there are no tests. There are no examples.

In [2]: %timeit integrate_f(10.4, 2.3, 17)
100000 loops, best of 3: 5.12 µs per loop

Into this Cython code, with the types annotated after running it through Type Tracing:
In [1]: %load_ext Cython

In [2]: %%cython
   ...: cdef double f(double x):
   ...:     return x**2-x
   ...:
   ...: def integrate_f_c(double a, double b, int N):
   ...:     """
   ...:     :Example:
   ...:     >>> integrate_f_c(10.4, 2.3, 17)
   ...:     -342.34804152249137
   ...:     """
   ...:     cdef int i
   ...:     cdef double s, dx
   ...:     s = 0
   ...:     dx = (b-a)/N
   ...:     for i in range(N):
   ...:         s += f(a+i*dx)
   ...:     return s * dx 
   ...:

In [3]: %timeit integrate_f_c(10.4, 2.3, 17)

10000000 loops, best of 3: 117 ns per loop
Normal python was 5120 nanoseconds (5.12 µs). The cython compiled version is 117 nanoseconds. The result is 44x faster code, and we have all the types annotated, with an example. This helps you understand it a little better than before, too.

This was a great result for me. It shows that yes, combining Type Tracing with Cython can give improvements over using Cython just by itself. Note that Cython is not only for speeding up simple numeric code. It's also been used to speed up string based code, database access, network access, and game code.

So far I've made a simple mapping of python types to cython types. To make the code more useful would require quite a bit more effort. However, if you use it as a tool to help you write cython code yourself, then it's very useful to speed up that process.
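
The mapping itself starts out very small. Something like this sketch of what I mean (not the exact table I used):

# An illustrative mapping from recorded python type names to cython types.
# (The real thing needs care with integer sizes and overflow.)
PY_TO_CYTHON = {
    'int': 'long',
    'float': 'double',
    'bool': 'bint',
}

def cython_args(names, recorded_types):
    # type: (list, dict) -> str
    """Render a cython argument list, leaving unknown types untyped."""
    parts = []
    for name in names:
        ctype = PY_TO_CYTHON.get(recorded_types.get(name, ''))
        parts.append('%s %s' % (ctype, name) if ctype else name)
    return ', '.join(parts)

print(cython_args(['a', 'b', 'N'], {'a': 'float', 'b': 'float', 'N': 'int'}))
# double a, double b, long N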

The best cases so far are when it knows all of the types, all of the types have direct cython mappings, and it avoids calling python functions inside the function. In other words, 'pure' functions.

Cross validation for Cython and python versions?

In a video processing project I worked on there were implementations in C, and other assembly implementations of the same functions. A very simple way of testing is to run all the implementations and compare the results. If the C implementation gives the same results as the assembly implementations, then there's a pretty good chance they are correct.

In [1]:  assert integrate_f_c(10.4, 2.3, 17) == integrate_f(10.4, 2.3, 17)

If we have a test runner, we can check if the inputs and outputs are the same between the compiled code and the non compiled code. That is, cross validate implementations against each other for correctness.
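
Since the recorded types are enough for hypothesis to generate inputs, the cross validation itself can be property based too. A sketch, assuming both implementations from above are importable:

import math
from hypothesis import given
import hypothesis.strategies

@given(hypothesis.strategies.floats(-100, 100),
       hypothesis.strategies.floats(-100, 100),
       hypothesis.strategies.integers(1, 100))
def test_integrate_f_cross_validate(a, b, N):
    # The compiled and interpreted versions should agree,
    # allowing a little floating point wiggle room.
    assert math.isclose(integrate_f_c(a, b, N), integrate_f(a, b, N),
                        rel_tol=1e-9, abs_tol=1e-9)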

Property testing.

The most popular property testing framework is QuickCheck, from the Haskell world. However, python also has an implementation - Hypothesis. Rather than supplying examples, as is usual with unit testing, you tell it about properties which hold true.

Can we generate a hypothesis test automatically using just types collected with Type Tracing?

Below we can see some unit tests (example based testing), as well as some Hypothesis tests (property testing). They are for a function "always_add_something(x)", which always adds something to the number passed in. As a property, we would say that "always_add_something(x) > x". That property should hold true for every value of x, given x is an int.

Note that the program is fully typed, and passes type checking with mypy. Also note that there is 100% test coverage (if I remove the divide by zero error I inserted).

from hypothesis import given
import hypothesis.strategies

from bad_logic_issue import always_add_something, always_add_something_good

def test_always_add_something():
    # type: () -> None
    assert always_add_something(5) >= 5
    assert always_add_something(200) >= 200

def test_always_add_something_good():
    #type: () -> None
    assert always_add_something_good(5) >= 5
    assert always_add_something_good(200) >= 200

@given(hypothesis.strategies.integers())
def test_always_add_something_property(x):
    assert always_add_something(x) > x


# Here we test the good one.
@given(hypothesis.strategies.integers())
def test_always_add_something_good_property(x):
    assert always_add_something_good(x) > x

Here are two implementations of the function. The first one is a contrived example, in order to show two types of logic errors that are quite common. Even 30 year old code used by billions of people has been shown to have these sorts of errors. They're sort of hard to find with normal testing methods.

def always_add_something(x):
    # type: (int) -> int
    '''Silly function that is supposed to always add something to x.

    But it doesn't always... even though we have
     - 'complete' test coverage.
     - fully typed
    '''
    r = x  # type: int
    if x > 0 and x < 10:
        r += 20
    elif x > 15 and x < 30:
        r //= 0
    elif x > 100:
        r += 30

    return r


def always_add_something_good(x):
    # type: (int) -> int
    '''This one always does add something.
    '''
    return x + 1

Now, hypothesis can find the errors when you write the property that the return value needs to be greater than the input. What if we just use the types recorded with Type Tracing to give hypothesis a chance to test? Hypothesis comes with a number of test strategies which generate many variations of a type. Eg, there is an "integers" strategy.

# Will it find an error just telling hypothesis that it takes an int as input?
@given(hypothesis.strategies.integers())
def test_always_add_something(x):
    always_add_something(x)

It finds the divide by zero issue (when x is 16). However, it does not find the other issue, because it still does not know that there is a problem. We haven't told it anything about the result always needing to be greater than the input.
bad_logic_issue.py:13: ZeroDivisionError
-------------------------------------------------------- Hypothesis --------------------------------------------------------
Falsifying example: test_always_add_something(x=16)
The result is that yes, it could find one issue automatically, without having to write any extra test code, just from Type Tracing.

For pure functions, it would be also useful to record some examples for unit test generation.
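
eg, a captured call could be turned directly into a regression test. A sketch:

def test_always_add_something_good_example():
    # type: () -> None
    # Generated from a traced call: always_add_something_good(5) -> 6
    assert always_add_something_good(5) == 6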

In conclusion.

I'm happy with the experiment overall. I think it shows it can be a fairly useful technique for making python programs more understandable, faster, and more correct. It can also help speed up porting old python2 code dramatically (especially when that code has limited documentation and tests).

I think the experiment also shows that combining existing python tools (coverage, mypy, Cython, and hypothesis) can give some interesting extra abilities without too much extra effort. eg. I didn't need to write a robust tracing module, a static type checker, or a python compiler. However, it would take some effort to turn these into robust general purpose tools. Currently what I have is a collection of fragile hacks, without support for many corner cases :)

For now I don't plan to work on this any more in the short term. (Unless of course someone wants to hire me to port some python2 code. Then I'll work on these tools again since it speeds things up quite a lot).

Any corrections or suggestions? Please leave a comment, or see you on twitter @renedudfield