Thursday, August 30, 2007

PyWeek #5 theme voting on now! 84 entries so far.

There are 84 entries so far in the PyWeek game development competition. Some entries are teams, and some are solo efforts.

The themes for this pyweek game development competition are:

  • Twinkle
  • Turmoil
  • Ticking
  • Twisted
  • Tyger, Tyger

    So join up, and vote for a theme. The Pyweek game competition starts soon.


    Friday 2007/08/03 Registration open
    Sunday 2007/08/26 Theme voting underway
    Sunday 2007/09/02 Challenge start
    Sunday 2007/09/09 Challenge end, judging begins
    Sunday 2007/09/23 Judging closes, winners announced
    Friday, August 24, 2007

    Delight at my new-old recycled laptop

    I don't like to contribute too much to all the pollution that getting new hardware creates. There's lots of perfectly good old hardware being given away or sold at low prices in second hand stores and on online auction sites.

    A couple of months ago I got a Dell Latitude C610 laptop from a place down the road from me that restores and sells old laptops. With only 20 gigs of HD space and 256 megs of RAM, it's nothing close to what you'd buy new. However it seems to do ok.

    I also like to use older hardware for developing software - because it's like 'eating my own dogfood' in a way. If I see the performance problems that slow computers experience, then I can fix them up. All too often I see websites that perform slowly on old machines, or don't fit important information on their low resolution screens.

    My franken box

    The laptop is so nice that I've mostly retired my old Duron 850MHz desktop machine... which I've had since 1997 or so. Well, some parts I've had since then - it's changed power supplies, motherboards, CPUs, hard drives, network cards, sound cards, memory and video cards. Over time I've been upgrading parts as I found new ones. When people threw away their old graphics cards, I replaced mine with a better one. The same with all the other parts. I don't think any of the parts in it are from the original 1997 machine any more, but I still think of it as the same franken-machine.

    This is the first time since 1997 that I have stopped using that hardware as my main machine. I still have my original home directories though :) I'm also using the mouse from that machine, and a few usb parts... so I guess the franken-box continues! My franken monster of a machine lives on :) I wonder if 20 years from now I'll still be using parts of that original machine configuration - with files from that original Debian 1.3 install?

    Ubuntu failed to install for some reason - so I stuck with Debian. With Debian installed, everything on the laptop is working. I can even suspend to disk, or to ram. The on-board sound isn't the best, but that's ok. Even the function keys work for adjusting the volume, and changing the LCD brightness/contrast. It doesn't have built in wireless, but came with a usb wireless adapter. I have a few old PCMCIA wireless adapters lying around too that I could use (and did when I was in Europe last). I never use modems these days, so I'm not sure if the on-board modem works. The video card works quite well, with even basic opengl hardware acceleration seeming to work fine. All without the hassle of a binary driver - the open source video driver works fine.

    Using my old desktop machine over the network - from my slow laptop.

    Firefox is a memory hog, sometimes using up more memory than is even installed in my laptop (256 MiB). So I now run firefox from my old desktop machine over the network.

    Just running firefox from another machine has made this laptop *heaps* more usable.

    btw, it's easy to run X programs from another machine. Just ssh -X host - the -X enables X11 forwarding. It's a tiny bit more laggy over the network - but definitely better than using all that swap.
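
    For example, running firefox on the desktop box but displaying it on the laptop (assuming the desktop is reachable under the hostname 'desktop'):

    ssh -X desktop firefox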

    I only started doing this a couple of days ago. The downside is that I need to sync a couple of directories on my old desktop with my laptop: the .mozilla directory, and my file download directory. That way, when I save files with the browser running on the desktop, I can access them easily from my laptop. A couple of scripts accomplish this easily. A next step is to automate this in the background, so I don't have to run the scripts manually. I think a watch type script running on my old desktop would work fine - or a file event using cron. Although just mounting my laptop home directory from the desktop box could be optimal - with maybe the firefox cache dir mounted on my desktop box (or I could just set the disk cache to 0, and increase the firefox memory cache).
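
    The sync scripts are basically just rsync one-liners - something like this (hypothetical paths, pulling from the desktop to the laptop):

    rsync -a desktop:.mozilla/ ~/.mozilla/
    rsync -a desktop:downloads/ ~/downloads/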

    Another problem with running firefox over the network is sound. Sound doesn't automatically work from firefox when running it over the network. I haven't figured this one out yet - probably some sort of network sound daemon is needed. I don't think firefox supports network sound by itself. I'll have to investigate network sound...

    Another downside is that I need my desktop machine on whilst I use my laptop. This obviously uses up more power :( Maybe it's even worse for the environment than just buying a newer, more efficient laptop. However I only use it during my work day - when my requirements for the computer are higher. So it goes to sleep at night, or whenever I leave the studio. It also puts the HD to sleep, and it often runs in low power mode.

    Network boot would be nice... so I could somehow get that machine to boot from my laptop... as long as I can get a connection to the network. I haven't found anything that'll allow me to do this over a wireless connection yet. Maybe if linux were running on my router I could do it - get my router to network boot my desktop box. In the meantime I can just get off my bum and walk over to it.

    I still use my old desktop box for backing up stuff, and will probably use it for any programming that requires lots of disk space. It's also handy for testing out load balancing for web apps, and other programming tasks.

    I plan to upgrade the ram on this laptop at some point... but at the moment 512MB costs around $200AUD - a tiny bit less than what I paid for the laptop. Probably worth it though.

    linux 2.6.23 is looking to be a big improvement for desktop use.

    I'd like to try the swap prefetch linux mods... as well as the memory compression linux mod. I think they'd speed this machine up. It's looking like linux 2.6.23 will have a few desktop fixes that help a lot - the CFS scheduler and swap prefetch.

    However I think the compressed cache work has stalled since the last bit of work done on it for the google summer of code - http://linuxcompressed.sourceforge.net/ There's some interesting research there, including benchmarks for the compressed cache. It makes perfect sense that a compressed cache would be faster, since hitting the HD is really slow, and compression/decompression is really quick on modern CPUs. There's only a patch for 2.6.21 so I haven't tried it out.

    Here are some benchmarks someone made, which measure responsiveness by window moving, for the new linux kernel...

    vanilla 2.6.22.5:
    terminal window visible: real time was 59 seconds.
    terminal window covered: real time was 32 seconds.
    terminal window shaded: real time was 135 seconds.
    terminal window iconified: real time was 160 seconds.

    2.6.22.5 with latest CFS patch applied:
    terminal window visible: real time was 27 seconds.
    terminal window covered: real time was 13 seconds.
    terminal window shaded: real time was 21 seconds.
    terminal window iconified: real time was 21 seconds.

    compare to CK's -
    2.6.22-ck1:
    terminal window visible: real time was 25 seconds.
    terminal window covered: real time was 13 seconds.
    terminal window shaded: real time was 22 seconds.
    terminal window iconified: real time was 22 seconds.

    As you can see, 2.6.22.5 with the latest CFS patch seems to be quite a lot more responsive than the vanilla kernel. I've got a feeling it's going to be as big a performance jump as the move from linux 2.2 to 2.4. Well, I hope so anyway :)

    Wednesday, August 22, 2007

    plone 3 released

    Plone 3 has been released!!!


    Plone is not beta like those other toy frameworks (django, pylons, paste, turbogears). I guess there are more python web frameworks out there after all than some people would like you to believe ;)


    kidding, kidding... don't eat me.


    Should be fun to play with... I haven't used it since the 2.x series - so I'm looking forward to seeing all the improvements. Congratulations to the plone team.

    Pygame weekly mini sprint 2007/08/22

    This week we found and fixed a long-standing problem with the SRCALPHA flag - the flag used for per pixel alpha (transparent images).

    Fixing that turned up a few other issues with Surface: there were problems with error checking, and keyword arguments didn't work.

    So we got the error checking working, as well as keyword arguments.


    The major piece of work that went in was the PixelArray from Marcus. This is what we will use to replace surfarray. A PixelArray is like a basic Numeric/numpy array, and like what you'd expect returned from a surf.tobuffer() call. With it we will be able to support Numeric and Numpy by loading them dynamically.

    I still need to write the surfarray.py which will replace the compiled surfarray. There will be a surfarray_numeric and a surfarray_numpy, so it will be backwards compatible, and you'll be able to use the array type you choose. We will include a frombuffer() method for Numeric arrays - which won't require Numeric to be installed at compile time. Numpy already has a frombuffer method - so that shouldn't be too hard.

    PixelArray will also support basic slicing functionality like Numeric - so you can do things like array[x:y, a:b:z] = ... etc. This is what Marcus is working on next. So there will be basic effects you'll be able to do even without Numpy or Numeric, as well as being able to send data to PIL, opengl, wx etc without using Numeric. So you won't need the extra dependency for these common use cases.
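
    As a rough sketch of the kind of thing the slicing should allow (the API is still being worked on, so details may change):

    import pygame

    surf = pygame.Surface((100, 100))
    ar = pygame.PixelArray(surf)
    # paint a 10 pixel wide vertical strip white - no Numeric or Numpy needed.
    ar[20:30, :] = surf.map_rgb((255, 255, 255))
    del ar  # deleting the PixelArray unlocks the surface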


    In other things, I started to play a little bit with the upcoming SDL 1.3. This is the release with opengl and direct3d hardware acceleration, multiple windows, sound recording - and many other goodies. It's not finished yet, but is already fun to play with. Pygame will probably start to support it only when the final version is released, as some of the new APIs could still change. There is an SDL 1.2 compatible API - you just can't access any of the new features with the old API (obviously :)

    pygame 1.8 will take advantage of all the improvements going on in the underlying SDL libraries. All the different users of different languages - C++, ruby, python - feed back into the C SDL. Lots of different engines use it, and heaps of game developers. So it gets *heaps* of users and testing - as well as optimization. Lots of linux distributions test it, as well as people releasing games on different platforms. Pygame 1.8 on the mac will have a whole bunch of altivec optimizations. On windows and unix there have been a bunch of mmx optimizations too since the last pygame release (1.7.1). The image, sound and font modules have all gotten speed ups and bug fixes - improving their quality. So not only will pygame get the pygame specific improvements - it gets the improvements from SDL too. Read the SDL release notes for more details.

    Over on the pyweek site someone has made a python wrapper for the 2d physics library Chipmunk. Check out the youtube videos of the physics... Domino pyramid, Domino smash. It might be a good thing to add to pygame in the release after pygame 1.8.

    The other week the SDL_gfx author mentioned that he was interested in collaborating with the pygame people some more - to get more of SDL_gfx into pygame, and to move the pygame improvements back into SDL_gfx. The new improvements to SDL_gfx include bezier curves, more antialiased shapes, and a textured polygon drawer.

    Sunday, August 19, 2007

    javascript for templates - it's happening.

    It seems like javascript for template languages might be the way forward.

  • designer friendly. lots of web designers know a little javascript or actionscript.
  • can run in a browser.
  • javascript can be sandboxed.
  • most web developers know a little js too. Web developers that don't know javascript will most likely know at least a little C/php/perl/java, which is enough to do a lot with javascript.

    Tenjin allows javascript templates. It also allows many other scripting languages to be used in templates... but that's not really what I'm talking about.

    Fast, small (200k), opensource and widely deployed javascript/ECMAScript interpreters exist. Tamarin is an ECMAScript JIT-optimized virtual machine from flash 9, which mozilla is using in upcoming versions of firefox. There is also spidermonkey - the current firefox javascript implementation. Finally there is haxe, an ECMAScript-like language that can output .js files, flash swf files, and also neko, which runs as a virtual machine in apache. So it would be possible to use your templates in flash, as well as in normal html browsers.

    However, you don't need to write your templates in javascript in order to convert them into javascript. Genshi2js can convert genshi templates into javascript, so you can run your templates in a browser, or server side. This shows you can gain the benefit of running templates in the browser without using the javascript language itself.

    Language-agnostic template languages are important, because a wider selection of people can edit your templates. Using javascript in the template gets you that wider selection of people. The genshi2js solution is quite nice, and it proves that you can compile other template languages to be executed in the browser - however genshi is not known by as many people as javascript.

    An exciting possibility is applications which serve dynamic pages via haxe->mod_neko with compiled tenjin templates. This would seem like a very fast way to generate dynamic pages from json data sources. The json could quite easily be generated in any language - including python - or come from static files, or any webservice through mod_proxy.

    I'm sticking with python and php on the server side for the time being - not switching to ECMAScript server side. Python because I like coding python, and php because it is widely installed (and there seems to be more work available). However, using javascript for the templates might just be an option I can live with. Especially if I can compile the javascript-based templates into php and python.

    It all depends on the project of course. But I'm definitely considering genshi (with genshi2js), and tenjin for future projects.
    Friday, August 17, 2007

    Don't trust database input either.

    The database is just another form of input into your program which you should not trust.

    You should validate the data coming from your database as much as you validate the data going into the database.

    How do you know the database has not been corrupted, or compromised? Or that some script on your website is not validating data properly when it updates the database? Or that a DB admin decided to edit the database directly and put in some invalid data?

    What if someone a year from now hooks up another program to the database, which doesn't use the same data validation that you do? Then your program, which doesn't validate input data from the database, will start to fail.

    There are many ways data from the database might not be what you are expecting - and that's not counting people putting data in there maliciously, say if they somehow get your database password, or find an SQL injection.

    Executing code from the database surely sounds a bit crazy... right? Yet people execute code from the database whenever they use python pickle to store data. If, for any of the reasons above, someone can change your database, then you are allowing them to execute code. So if you are storing pickles - please stop, and change to a safer form of serialisation.
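
    For example, a sketch using simplejson (any plain data format will do):

    import simplejson

    row_from_db = '{"name": "player1", "score": 10}'

    # DANGEROUS: pickle.loads(row_from_db) would run whatever code the
    # writer of that row wanted. With a pure data format the worst case
    # is a parse error, not code execution.
    obj = simplejson.loads(row_from_db)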

    If you have validation functions, it's fairly easy to reuse that code for validating data from the database - especially if you are using an ORM, MVC, or have a central get() type function.
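
    A sketch of what that reuse could look like (validate_user and db.fetch_user are made-up names):

    def validate_user(data):
        # the same checks used before the data goes in...
        if not isinstance(data.get("age"), int) or not (0 <= data["age"] < 150):
            raise ValueError("bad age: %r" % (data.get("age"),))
        return data

    def get_user(db, user_id):
        # ...reused on the way out, because the row may not be what we wrote.
        return validate_user(db.fetch_user(user_id))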

    Unfortunately validating data from the database seems to be a bit too cpu intensive for some people. That, and it's not as big a risk as many other inputs.

    Thursday, August 16, 2007

    collections in python - using less memory

    Each object in python takes up far more memory than you might think. An int object, for example, takes up a lot more than 4 bytes.


    So creating python objects for each element of a collection of data can use up far more memory than is needed.

    A simple pattern for avoiding this wasted memory is to store the data in an array.array(), then construct an object from the relevant part of the data as you need it.


    Using a python class instance to store each int:
    Virtual 23992
    Resident 21176

    Constructing python class instances dynamically from the raw data:
    Virtual 6572
    Resident 3992


    As you can see this method can save a *lot* of memory.

    Here's some basic code demonstrating this technique... it isn't necessarily the API to use, it just demonstrates the memory savings. You can make a nicer API on top of that... or use your existing api with magic get properties.

    wget http://rene.f0o.com/~rene/stuff/collection_memory.py

    # using python objects...
    python collection_memory.py 100000 -object & sleep 2 ; ps aux | grep python
    # using a collection of python objects, which constructs a class from the raw data.
    python collection_memory.py 100000 & sleep 2 ; ps aux | grep python
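
    The core of the technique looks roughly like this (a minimal sketch, not the actual collection_memory.py script):

    import array

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

    class PointCollection(object):
        # store the raw ints in one flat array.array, instead of
        # keeping thousands of Point objects alive.
        def __init__(self):
            self.data = array.array("i")

        def append(self, x, y):
            self.data.extend((x, y))

        def __getitem__(self, i):
            # construct the full object only when it's actually needed.
            return Point(self.data[i * 2], self.data[i * 2 + 1])

    points = PointCollection()
    for i in range(100000):
        points.append(i, i)
    p = points[50000]  # a real Point object, built on demand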

    Many uses of python could use this technique to save lots of memory. Anything that operates on a large number of python objects.

  • Python database drivers are one area which could use this technique.

  • Sprite classes (like pygame sprites) could use an array of underlying data to store all the attributes.
    Wednesday, August 15, 2007

    Pygame weekly mini sprint 2007/08/15

    There's been a few new things going into pygame recently.

    Today two things came off the 'todo before release list'.

    The first was that the pygame.mask module was finished - the remaining from_surface function was implemented. It is 128x faster than the version written in python. It could still be optimized more, but I think it should be fast enough.
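
    Using it is a one-liner (the image filename here is made up):

    import pygame
    import pygame.mask

    surf = pygame.image.load("player.png")
    mask = pygame.mask.from_surface(surf)  # pixels with alpha > 127 are set
    print mask.get_at((0, 0))              # 1 if that pixel is solid, else 0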

    The second was the new sprite code from DR0ID. This has been a long time in development, and adds some pretty useful functionality to the pygame.sprite module - like support for layers, blend modes, and automatic detection of which is faster: full screen updates, or dirty rect updates.

    Now the 'todo before release list' is a lot shorter:

  • windows+mingw compilation instructions
  • remove current C based surfarray which uses Numeric. Replace it with a PixelArray C type. Then implement Numeric and Numpy support in python.
  • Mac OSX scrap fixes - using the new scrap api for clipboard support.

    Marcus has the basics of the PixelArray code done, and we have figured out a way to get it all working with Numeric and numpy. It's kind of like implementing a Surface.tobuffer() function - but more general purpose. We are aiming for a situation where people can do quick transfers of data to libraries like PIL, opengl etc without requiring Numeric, as is currently required. We also want to support Numeric and Numpy without requiring them at C compilation time.

    The mingw compilation instructions are already on the wiki... but are not 100% complete. We want to make it so people can basically just do python setup.py install. Right now there are 100 or so steps to do before that will work. We want to be able to download mingw, download all the source dependencies, and download all required patches - then build and install all the separate pieces. Each step will be optional, so people who already have mingw or SDL installed will be able to skip those steps.

    Hopefully then we can get more people on windows able to modify pygame if they need to - without spending a week trying to get it to compile.


    On the documentation front I'm thinking of a few things to improve. The first is a pygame glossary, where terms used in pygame, graphics, and game programming are explained. At the moment there's only one term described... 'Dirty Rects'... but hopefully more will be added over time.

    The next one is being able to type a module, function, or class name after the pygame.org url. For example: pygame.org/image.load -> pygame.org/docs/ref/image.html#pygame.image.load . It won't do any searching (for now), it will just make it a little easier to find the docs online.
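
    A hypothetical sketch of the mapping (not the real pygame.org code):

    def doc_url(path):
        # "image.load" -> the anchor in the reference docs
        module = path.split(".")[0]
        return "http://www.pygame.org/docs/ref/%s.html#pygame.%s" % (module, path)

    assert doc_url("image.load") == (
        "http://www.pygame.org/docs/ref/image.html#pygame.image.load")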

    The other documentation thing will be something like the pyopengl api guide. For each function and constant it links to its use in the demo programs that come with pyopengl. This is really useful when trying to see how a function is used.




    Here's the top of the WHATSNEW file from recent checkins.

    Aug 15, 2007
  • The sprite module has had some big changes from DR0ID. It now has LayeredUpdates and LayeredDirty groups, for using layers when rendering sprites. LayeredDirty is an alternative to RenderUpdates that automatically finds the best display method (either full screen updates, or dirty rect updates). It's faster if you have sprites that don't move. Thanks DR0ID!
  • Added pygame.mask.from_surface which can make a Mask object from a surface. It's 128x faster than the python version!
  • pygame.movie bug fix. Thanks Lenard Lindstrom!

    Jun 25, 2007
  • Removed QNX support from scrap module. Thanks Marcus!
  • Added smoothscale (with MMX!) function from Richard Goedeken

    Jun 27, 2007
  • Fixes from Marcus for ref counting bugs.
  • Also using METH_NOARGS for functions with no arguments, which should make some functions slightly faster. Thanks Marcus, and thanks Campbell Barton for spotting them.


    It turns out there are some SSE instructions in the mmx scaling code - so they need to be fixed too. I think Lenard Lindstrom has those fixed - but it still needs testing.
    Friday, August 10, 2007

    timing and unittests - graphing speed regressions/improvements

    Are there any python tools which allow you to run timing tests inside of unittests, and then see a useful report?

    I'd like to see a report of how long each timing test took, and also see differences between runs. When comparing two runs I'd like to see visually which is faster.

    Timing information is very important for gui applications like games, and for websites. Since machines are so different, it's useful to be able to time things as they run on different machines. A game or website can quite easily run 10-30 times slower even on machines with the same CPU, because of many other factors: OS, hard drive speed, available memory, installed drivers, directx/opengl/X/frame buffer, different browser speed, and different installed versions of libraries or plugins like flash. Testing all these things manually is almost impossible; testing them manually every time something changes is definitely impossible.

    So I want to use this tool for pygame unittests specifically, and also for websites. If there's another testing framework which can do timing very well, then we could switch testing frameworks away from unittest.

    So I'd like to be able to save the timing data to a file or a database, and select the runs I'd want compared against.

    Extra points for being able to do that automatically for different revisions in subversion. So I could tell it to check out different revisions/branches, run setup.py, run the tests, then show me a report. But that's something I could script later, I guess.

    It would be nice if the tests could be separate processes, so I could do something like:

    import os

    def some_tests():
        flags = ['1', '2', '3', '4', '5', '6', '7']
        for f in flags:
            # the default arg pins the current f at definition time
            my_f = lambda f=f: os.system("python myscript.py -doit=%s" % f)
            # do_test would be the timing framework's hook
            do_test(name="a test for:%s" % f, func=my_f)

    So it would just time how long each one takes to run as many times as it needs.

    It'd be nice if I could read what the process prints, and provide a parsing function which extracts timing information from the output. Then when something prints "33FPS" or "blit_blend_ADD 10202094.0093" I could tell the timing framework what those mean.
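
    A user supplied parser might be as simple as this (hypothetical - the framework hook and names are made up):

    import re

    def parse_timings(output):
        # pull "name number" pairs like "blit_blend_ADD 10202094.0093"
        # out of whatever the benchmarked process printed.
        timings = {}
        for name, value in re.findall(r"([A-Za-z_]\w*)\s+([\d.]+)", output):
            timings[name] = float(value)
        return timings

    print parse_timings("blit_blend_ADD 10202094.0093")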

    Each script invocation would be timed automatically, and I could see results for tests like "a test for 1", "a test for 2" etc.

    That way I could more easily reuse existing timing/benchmark scripts, as well as time code which is not python - like any executable.

    It would be nice to be able to keep the timing tests with the other tests, but to disable the timing tests when needed - because it's likely some timing tests will run for a while, and do things like open many windows (which takes time).

    Website collection would get bonus points - collecting the timing information from any machine, so I can compare it all in one place. I want to run these timing tests on different machines, and allow random people on the internet to submit their timing results. Since I don't have 1000's of videocard/OS/cpu/hard disk combinations, it would be useful if anyone could submit their test results. An http collection method would also be useful for storing timing information from other languages - like flash, javascript, or html.

    Looking at the average, median, fastest run, and slowest run would be good too - rather than just the average.

    Merging multiple runs together, so you can see the slowest results from each run, or the fastest from each run, would be good too. Then you could do things like combine results from 1000 different machines, and find out the machine configurations for the slowest runs. If there are common slowdowns, you can look for similarities between those system configurations - allowing you to direct your optimization work better.

    Being able to run tests with a profiler active, then store the profiling data, would be nice. This would work with different profilers, like gprof or a python profiler. Then you could see a profile report from a slow run, and find out which functions you might need to optimize at an even lower level.

    Allowing different system configuration detection programs to run would be good too. So you can see what HD the slow machine was using, how much free ram it had, the load on the machine at the time of the run, etc.

    All this tied in with respect for people's privacy, so they know what data they are supplying, and how long it will be archived.

    Support for time series would be nice too - like giving it a list of how long each frame took to render. Then I could see a graph of that, and compare it over multiple runs. You could look at data from many machines and compare how they are performing, as well as compare different algorithms on different machines.


    Hopefully something already exists which can do some of these things?

    What I've written above is more a wish list, but even a portion of that functionality would allow people to more easily test for performance regressions.

    Wednesday, August 08, 2007

    Pyweek 5 - make a game in a week

    pyweek registration is open for the biannual game jam. http://www.pyweek.org/5/

    Which means you can join and put yourself in a team, or join up as a solo entrant.

    Spend a week (part time) finishing a game using python - Sunday 2nd of September to Sunday 9th of September.

    It is inspired by the ludumdare 48h comps, but people only use python, it is a week long, and there can be teams. Over 100 entrants joined in on the fun in previous competitions.

    Enter for a chance to prototype your next game, or to see if working in a team works on a small project. Or just take a break, get your creative juices going, and feel the energy of 100+ people simultaneously feeding off each other's creations. http://www.pyweek.org/5/

    It's also a great way to learn, and have fun with python. It's possibly the best way there is to improve your programming and game-making skills.