Tag: Linux
Setting up a guest network with Unifi APs
I’ve been pretty happy with the Unifi wifi access points I picked up a few months back, but one of the things I hadn’t managed to replicate over my old setup was a guest wifi network.
If I went all-in and bought a Unifi router, this would probably be fairly trivial to set up. But I wanted to build on the equipment I already had for now. Looking at some old docs, I’d need to get trunked VLAN traffic to the APs to separate the main and guest networks.
Running the UniFi Controller under LXD
A while back I bought some UniFi access points. I hadn’t gotten round to setting up the Network Controller software to properly manage them though, so thought I’d dig into setting that up.
Building IoT projects with Ubuntu Core talk
Last week I gave a talk at Perth Linux Users Group about building IoT projects using Ubuntu Core and Snapcraft. The video is now available online. Unfortunately there were some problems with the audio setup leading to some background noise in the video, but it is still intelligible:
The slides used in the talk can be found here.
Performing mounts securely on user owned directories
While working on a feature for snapd, we had a need to perform a "secure bind mount". In this context, "secure" meant:
- The source and/or target of the mount is owned by a less privileged user.
- User processes will continue to run while we're performing the mount (so solutions that involve suspending all user processes are out).
- While we can't prevent the user from moving the mount point, they should not be able to trick us into mounting to locations they don't control (e.g. by replacing the path with a symbolic link).
The main problem is that the mount system call uses string path names to identify the mount source and target. While we can perform checks on the paths before the mounts, we have no way to guarantee that the paths don't point to another location when we move on to the mount() system call: a classic time of check to time of use race condition.
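One way to narrow that window (sketched below in Python purely as an illustration, not as a description of what snapd actually does) is to open the directories with symlink-following disabled, hold on to the file descriptors, and perform the mount through their /proc/self/fd/ aliases. A fuller solution would also walk each path component by component with openat(), but the sketch shows the core idea:
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
MS_BIND = 4096  # from <sys/mount.h>

def open_dir_nofollow(path):
    # Refuse to follow a symlink in the final path component.
    return os.open(path, os.O_PATH | os.O_DIRECTORY | os.O_NOFOLLOW)

def secure_bind_mount(source, target):
    src_fd = open_dir_nofollow(source)
    tgt_fd = open_dir_nofollow(target)
    try:
        # Mounting via /proc/self/fd makes the kernel resolve the descriptors
        # we already hold, so renaming or replacing the original paths after
        # our checks no longer redirects the mount.
        src = "/proc/self/fd/%d" % src_fd
        tgt = "/proc/self/fd/%d" % tgt_fd
        if libc.mount(src.encode(), tgt.encode(), None, MS_BIND, None) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(src_fd)
        os.close(tgt_fd)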
ThinkPad Infrared Camera
One of the options available when configuring my ThinkPad was an infrared camera, with "Windows Hello" facial recognition login as the main selling point. While I wasn't planning on keeping Windows on the system, I was curious to see what I could do with it under Linux. Hopefully this is of use to anyone else trying to get it to work.
The camera is manufactured by Chicony Electronics (probably a CKFGE03 or similar), and shows up as two USB devices:
Tag: Networking
Setting up a guest network with Unifi APs
I’ve been pretty happy with the Unifi wifi access points I picked up a few months back, but one of the things I hadn’t managed to replicate over my old setup was a guest wifi network.
If I went all-in and bought a Unifi router, this would probably be fairly trivial to set up. But I wanted to build on the equipment I already had for now. Looking at some old docs, I’d need to get trunked VLAN traffic to the APs to separate the main and guest networks.
Running the UniFi Controller under LXD
A while back I bought some UniFi access points. I hadn’t gotten round to setting up the Network Controller software to properly manage them though, so thought I’d dig into setting that up.
Tag: UniFi
Setting up a guest network with Unifi APs
I’ve been pretty happy with the Unifi wifi access points I picked up a few months back, but one of the things I hadn’t managed to replicate over my old setup was a guest wifi network.
If I went all-in and bought a Unifi router, this would probably be fairly trivial to set up. But I wanted to build on the equipment I already had for now. Looking at some old docs, I’d need to get trunked VLAN traffic to the APs to separate the main and guest networks.
Running the UniFi Controller under LXD
A while back I bought some UniFi access points. I hadn’t gotten round to setting up the Network Controller software to properly manage them though, so thought I’d dig into setting that up.
Tag: Ubuntu
Running the UniFi Controller under LXD
A while back I bought some UniFi access points. I hadn’t gotten round to setting up the Network Controller software to properly manage them though, so thought I’d dig into setting that up.
Exploring Github Actions
To help keep myself honest, I wanted to set up automated test runs on a few personal projects I host on Github. At first I gave Travis a try, since a number of projects I contribute to use it, but it felt a bit clunky. When I found Github had a new CI system in beta, I signed up and was accepted a few weeks later.
While it is still in development, the configuration language feels lean and powerful. In comparison, Travis's configuration language has obviously evolved over time, with some features not interacting properly (e.g. matrix expansion only working on the first job in a workflow using build stages). While I've never felt like I had a complete grasp of the Travis configuration language, the single-page description of the Actions configuration language feels complete.
Building IoT projects with Ubuntu Core talk
Last week I gave a talk at Perth Linux Users Group about building IoT projects using Ubuntu Core and Snapcraft. The video is now available online. Unfortunately there were some problems with the audio setup leading to some background noise in the video, but it is still intelligible:
The slides used in the talk can be found here.
Ubuntu Desktop
When the Ubuntu Phone project was cancelled, I moved to the desktop team. The initial goal for the team was to bring up a GNOME 3 based desktop for the Ubuntu 17.10 release that would be familiar to both Ubuntu users coming from the earlier Unity desktop, and users of “vanilla” GNOME 3.
Performing mounts securely on user owned directories
While working on a feature for snapd, we had a need to perform a "secure bind mount". In this context, "secure" meant:
- The source and/or target of the mount is owned by a less privileged user.
- User processes will continue to run while we're performing the mount (so solutions that involve suspending all user processes are out).
- While we can't prevent the user from moving the mount point, they should not be able to trick us into mounting to locations they don't control (e.g. by replacing the path with a symbolic link).
The main problem is that the mount system call uses string path names to identify the mount source and target. While we can perform checks on the paths before the mounts, we have no way to guarantee that the paths don't point to another location when we move on to the mount() system call: a classic time of check to time of use race condition.
ThinkPad Infrared Camera
One of the options available when configuring my ThinkPad was an infrared camera, with "Windows Hello" facial recognition login as the main selling point. While I wasn't planning on keeping Windows on the system, I was curious to see what I could do with it under Linux. Hopefully this is of use to anyone else trying to get it to work.
The camera is manufactured by Chicony Electronics (probably a CKFGE03 or similar), and shows up as two USB devices:
Ubuntu Phone and Unity
At the end of 2012, I moved from Ubuntu One to the Unity API Team at Canonical. This team was responsible for various services that supported the Unity desktop shell: most noticeably the search functionality. This work initially focused on the Unity 7 desktop shipping with Ubuntu, but then changed focus to the Unity 8 rewrite used by the Ubuntu Phone project.
Ubuntu One
Ubuntu One was a set of online services provided by Canonical for Ubuntu users. It provided cloud hosted storage for files and structured data, synchronised to the user’s local machine. The Ubuntu One service was discontinued in 2014.
u1ftp: a demonstration of the Ubuntu One API
One of the projects I've been working on has been to improve aspects of the Ubuntu One Developer Documentation web site. While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.
I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients). To help verify that the documentation was useful, I wrote a small program to exercise those APIs. The result is u1ftp: a program that exposes a user's files via an FTP daemon running on localhost. In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.
Launchpad code scanned by Ohloh
Today Ohloh finished importing the Launchpad source code and produced the first source code analysis report. There seems to be something fishy about the reported line counts (e.g. -3,291 lines of SQL), but the commit counts and contributor list look about right. If you're interested in what sort of effort goes into producing an application like Launchpad, then it is worth a look.
Comments:
e -
Have you seen the perl language?
More Rygel testing
In my last post, I said I had trouble getting Rygel's tracker backend to function and assumed that it was expecting an older version of the API. It turns out I was incorrect and the problem was due in part to Ubuntu specific changes to the Tracker package and the unusual way Rygel was trying to talk to Tracker.
The Tracker packages in Ubuntu remove the D-Bus service activation file for the "org.freedesktop.Tracker" bus name so that if the user has not chosen to run the service (or has killed it), it won't be automatically activated. Unfortunately, instead of just calling a Tracker D-Bus method, Rygel was trying to manually activate Tracker via a StartServiceByName() call. This would fail even if Tracker was running, hence my assumption that it was a tracker API version problem.
Ubuntu packages for Rygel
I promised Zeeshan that I'd have a look at his Rygel UPnP Media Server a few months back, and finally got around to doing so. For anyone else who wants to give it a shot, I've put together some Ubuntu packages for Jaunty and Karmic in a PPA here:
Most of the packages there are just rebuilds or version updates of existing packages, but the Rygel ones were done from scratch. It is the first Debian package I've put together from scratch and it wasn't as difficult as I thought it might be. The tips from the "Teach me packaging" workshop at the Canonical All Hands meeting last month were quite helpful.
django-openid-auth
Last week, we released the source code to django-openid-auth. This is a small library that can add OpenID based authentication to Django applications. It has been used for a number of internal Canonical projects, including the sprint scheduler Scott wrote for the last Ubuntu Developer Summit, so it is possible you've already used the code.
Rather than trying to cover all possible use cases of OpenID, it focuses on providing OpenID Relying Party support to applications using Django's django.contrib.auth authentication system. As such, it is usually enough to edit just two files in an existing application to enable OpenID login.
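As a rough illustration of how little is involved (the identifiers below are from memory of the library's documentation, so treat them as approximate), the changes amount to a few lines in settings.py and a URL include:
# settings.py: enable the OpenID backend alongside the normal one.
INSTALLED_APPS += ('django_openid_auth',)
AUTHENTICATION_BACKENDS = (
    'django_openid_auth.auth.OpenIDBackend',      # OpenID Relying Party support
    'django.contrib.auth.backends.ModelBackend',  # keep password logins working
)
OPENID_CREATE_USERS = True       # create Django users on first OpenID login
LOGIN_URL = '/openid/login/'
LOGIN_REDIRECT_URL = '/'

# urls.py: hook up the views provided by the library.
from django.conf.urls.defaults import include, patterns
urlpatterns += patterns('',
    (r'^openid/', include('django_openid_auth.urls')),
)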
Streaming Vorbis files from Ubuntu to a PS3
One of the nice features of the PlayStation 3 is the UPNP/DLNA media renderer. Unfortunately, the set of codecs is pretty limited, which is a problem since most of my music is encoded as Vorbis. MediaTomb was suggested to me as a server that could transcode the files to a format the PS3 could understand.
Unfortunately, I didn’t have much luck with the version included with Ubuntu 8.10 (Intrepid), and after a bit of investigation it seems that there isn’t a released version of MediaTomb that can send PCM audio to the PS3. So I put together a package of a subversion snapshot in my PPA which should work on Intrepid.
Prague
I arrived in Prague yesterday for the Ubuntu Developer Summit. Including time spent in transit in Singapore and London, the flights took about 30 hours.
As I was flying on BA, I got to experience Heathrow Terminal 5. It wasn't quite as bad as some of the horror stories I'd heard. There were definitely aspects that weren't forgiving of mistakes. For example, when taking the train to the "B" section there was a sign saying that if you accidentally got on the train when you shouldn't have it would take 40 minutes to get back to the "A" section.
Weird GNOME Power Manager error message
Since upgrading to Ubuntu Gutsy I've occasionally been seeing the following notification from GNOME Power Manager:

I'd usually trigger this error by unplugging the AC adapter and then picking suspend from GPM's left click menu.
My first thought on seeing this was "What's a policy timeout, and why is it not valid?" followed by "I don't remember setting a policy timeout". Looking at bug 492132 I found a pointer to the policy_suppression_timeout gconf value, whose description gives a bit more information.
On the way to Boston
I am at Narita Airport at the moment, on the way to Boston for some of the meetings being held during UDS. It'll be good to catch up with everyone again.
Hopefully this trip won't be as eventful as the previous one to Florida :)
Schema Generation in ORMs
When Storm was released, one of the comments made was that it did not include the ability to generate a database schema from the Python classes used to represent the tables, even though this feature is available in a number of competing ORMs. The simple reason for this is that we haven't used schema generation in any of our ORM-using projects.
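To make the distinction concrete, here is a minimal Storm-style mapping (table and column names invented for the example): the class describes an existing table, and the CREATE TABLE statement is written by hand rather than generated from the class.
from storm.locals import Int, Unicode, Store, create_database

class Person(object):
    # Maps onto a table that is assumed to already exist.
    __storm_table__ = "person"
    id = Int(primary=True)
    name = Unicode()

database = create_database("sqlite:")
store = Store(database)

# The schema itself is maintained by hand (or by migration scripts).
store.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name VARCHAR)")

person = Person()
person.name = u"Example"
store.add(person)
store.flush()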
Furthermore, I'd argue that schema generation is not really appropriate for long-lived projects where the data stored in the database is important. Imagine developing an application along these lines:
Upgrading to Ubuntu Gutsy
I got round to upgrading my desktop system to Gutsy today. I'd upgraded my laptop the previous week, so was not expecting much in the way of problems.
I'd done the original install on my desktop back in the Warty days, and the root partition was a bit too small to perform the upgrade. As there was a fair bit of accumulated crud, I decided to do a clean install. Things mostly worked, but there were a few problems, which I detail below:
Canonical Shop Open
The new Canonical Shop was opened recently which allows you to buy anything from Ubuntu tshirts and DVDs up to a 24/7 support contract for your server.
One thing to note is that this is the first site using our new Launchpad single sign-on infrastructure. We will be rolling this out to other sites in time, which should give a better user experience to the existing shared authentication system currently in place for the wikis.
gnome-vfs-obexftp 0.4
It hasn't been long since the last gnome-vfs-obexftp release, but I thought it'd be good to get these fixes out before undertaking more invasive development. The new version is available from:
The highlights of this release are:
- If the phone does not provide free space values in the OBEX capability object, do not report this as zero free space. This fixes Nautilus file copy behaviour on a number of Sony Ericsson phones.
- Fix date parsing when the phone returns UTC timestamps in the folder listings.
- Add some tests for the capability object and folder listing XML parsers. Currently has sample data for Nokia 6230, Motorola KRZR K1, and Sony K800i, Z530i and Z710i phones.
These fixes should improve the user experience for owners of some Sony Ericsson phones by letting them copy files to the phone, rather than Nautilus just telling them that there is no free space. Unfortunately, if there isn't enough free space you'll get an error part way through the copy. This is the best that can be done with the information provided by the phone.
Investigating OBEX over USB
I've had a number of requests for USB support in gnome-vfs-obexftp. At first I didn't have much luck talking to my phone via USB. Running the obex_test utility from OpenOBEX gave the following results:
$ obex_test -u
Using USB transport, querying available interfaces
Interface 0: (null)
Interface 1: (null)
Interface 2: (null)
Use 'obex_test -u interface_number' to run interactive OBEX test client
Trying to talk via any of these interface numbers failed. After reading up a bit, it turned out that I needed to add a udev rule to give permissions on my phone. After doing so, I got a better result:
gnome-vfs-obexftp 0.3
I've just released a new version of gnome-vfs-obexftp, which includes the features discussed previously. It can be downloaded from:
The highlights of the release include:
- Sync osso-gwobex and osso-gnome-vfs-extras changes from Maemo Subversion.
- Instead of asking hcid to set up the RFCOMM device for communication, use an RFCOMM socket directly. This is both faster and doesn't require enabling experimental hcid interfaces. Based on work from Bastien Nocera.
- Improve free space calculation for Nokia phones with multiple memory types (e.g. phone memory and a memory card). Now the free space for the correct memory type for a given directory should be returned. This fixes various free-space dependent operations in Nautilus such as copying files.
Any bug reports should be filed in Launchpad at:
FM Radio in Rhythmbox – The Code
Previously, I posted about the FM radio plugin I was working on. I just posted the code to bug 168735. A few notes about the implementation:
- The code only supports Video4Linux 2 radio tuners (since that’s the interface my device supports, and the V4L1 compatibility layer doesn’t work for it). It should be possible to port it to support both protocols if someone is interested.
- It does not pass the audio through the GStreamer pipeline. Instead, you need to configure your mixer settings to pass the audio through (e.g. unmute the Line-in source and set the volume appropriately). It plugs in a GStreamer source that generates silence to work with the rest of the Rhythmbox infrastructure. This does mean that the volume control and visualisations won’t work.
- No properties dialog yet. If you want to set titles on the stations, you’ll need to edit rhythmdb.xml directly at the moment.
- The code assumes that the radio device is /dev/radio0.
Other than that, it all works quite well (I've been using it for the last few weeks).
FM Radio in Rhythmbox
I've been working on some FM radio support in Rhythmbox in my spare time. Below is a screenshot:

At the moment, the basic tuning and mute/unmute works fine with my DSB-R100. I don't have any UI for adding/removing stations at the moment though, so it is necessary to edit ~/.gnome2/rhythmbox/rhythmdb.xml to add them.
Comments:
Joel -
This feature would truly be a welcome addition!
I'm especially pleased it's being developed by a fellow Australian! (If the radio stations are any indication)
FM Radio Tuners in Feisty
I upgraded to Feisty about a month or so ago, and it has been a nice improvement so far. One regression I noticed though was that my USB FM radio tuner had stopped working (or at least, Gnomeradio could no longer tune it).
It turns out that some time between the kernel release found in Edgy and the one found in Feisty, the dsbr100 driver had been upgraded from the Video4Linux 1 API to Video4Linux 2. Now the driver nominally supports the V4L1 ioctls through the v4l1_compat layer, but it doesn't seem to implement enough V4L2 ioctls to make it usable (the VIDIOCGAUDIO ioctl fails).
Launchpad 1.0 Public Beta
As mentioned in the press release, we've got two new high profile projects using us for bug tracking: The Zope 3 Project and The Silva Content Management System. As part of their migration, we imported all their old bug reports (for Zope 3, and for Silva). This was done using the same import process that we used for the SchoolTool import. Getting this process documented so that other projects can more easily switch to Launchpad is still on my todo list.
UTC+9
Daylight saving started yesterday: the first time since the 1991/1992 summer for Western Australia. The legislation finally passed the upper house on 21st November (12 days before the transition date). The updated tzdata packages were released on 27th November (6 days before the transition). So far, there hasn't been an updated package released for Ubuntu (see bug 72125).
One thing brought up in the Launchpad bug was that not all applications used the system /usr/share/zoneinfo time zone database. So other places that might need updating include:
San Francisco
I arrived in San Francisco today for the Canonical company conference. Seems like a nice place, and not too cold :). So far I've just gone for a walk along Fisherman's Wharf for a few hours.
On the plane trip, I had a chance to see Last Train to Freo, which I didn't get round to seeing in the cinemas. Definitely worth watching.
Daylight Saving in Western Australia
Like a few other states, Western Australia does not do daylight saving. Recently the state parliament has been discussing a Daylight saving bill. The bill is now before the Legislative Council (the upper house). If the bill gets passed, there will be a 3 year trial followed by a referendum to see if we want to continue.
I hadn't been paying too much attention to it, and had assumed they would be talking about starting the trial next year. But it seems they're actually talking about starting it on 3rd December. So assuming the bill gets passed, there will be less than a month til it starts.
Building obex-method
I published a Bazaar branch of the Nautilus obex method here:
http://bazaar.launchpad.net/~jamesh/+junk/gnome-vfs-obexftp
This version works with the hcid daemon included with Ubuntu Edgy, rather than requiring the btcond daemon from Maemo.
Some simple instructions on building it:
- Download and build the osso-gwobex library:
svn checkout https://stage.maemo.org/svn/maemo/projects/connectivity/osso-gwobex/trunk osso-gwobex
The debian/ directory should work fine to build a package using debuild.
- Download and build the obex module:
bzr branch http://bazaar.launchpad.net/~jamesh/+junk/gnome-vfs-obexftp
There is no debian packaging for this — just an autogen.sh script.
Playing Around With the Bluez D-BUS Interface
In my previous entry about using the Maemo obex-module on the desktop, Johan Hedberg mentioned that bluez-utils 3.7 included equivalent interfaces to the osso-gwconnect daemon used by the method. Since then, the copy of bluez-utils in Edgy has been updated to 3.7, and the necessary interfaces are enabled in hcid by default.
Before trying to modify the VFS code, I thought I'd experiment a bit with the D-BUS interfaces via the D-BUS python bindings. Most of the interesting method calls exist on the org.bluez.Adapter interface. We can easily get the default adapter with the following code:
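The excerpt cuts off before the code, but with the bluez-utils 3.x D-Bus API it looked roughly like this (interface and method names quoted from memory, so treat them as approximate):
import dbus

bus = dbus.SystemBus()

# hcid exports a Manager object that knows about the local adapters.
manager = dbus.Interface(bus.get_object('org.bluez', '/org/bluez'),
                         'org.bluez.Manager')

adapter_path = manager.DefaultAdapter()        # e.g. '/org/bluez/hci0'
adapter = dbus.Interface(bus.get_object('org.bluez', adapter_path),
                         'org.bluez.Adapter')

print(adapter.GetAddress())                    # the adapter's Bluetooth address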
OBEX in Nautilus
When I got my new laptop, one of the features it had that my previous one didn't was Bluetooth support. There are a few Bluetooth related utilities for Gnome that let you send and receive SMS messages and a few other things, but a big missing feature is the ability to transfer files to and from the phone easily.
Ideally, I'd be able to browse the phone's file system using Nautilus. Luckily, the Maemo guys have already done the hard work of writing a gnome-vfs module that speaks the OBEX FTP protocol. I had a go at compiling it on my laptop (running Ubuntu Edgy), and you can see the result below:
Ubuntu Bugzilla Migration Comment Cleanup
Earlier in the year, we migrated the bugs from bugzilla.ubuntu.com over to Launchpad. This process involved changes to the bug numbers, since Launchpad is used for more than just Ubuntu and already had a number of bugs reported in the system.
People often refer to other bugs in comments, which both Bugzilla and Launchpad conveniently turn into links. The changed bug numbers meant that the bug references in the comments ended up pointing to the wrong bugs. The bug import was done one bug at a time, so if bug A referred to bug B but bug B hadn't been imported by the time we were importing bug A, then we wouldn't know what bug number it should be referring to.
Ekiga
I've been testing out Ekiga recently, and so far the experience has been a bit hit and miss.
- Firewall traversal has been unreliable. Some numbers (like the SIPPhone echo test) work great. In some cases, no traffic has gotten through (where both parties were behind Linux firewalls). In other cases, voice gets through in one direction but not the other. Robert Collins has some instructions on setting up siproxd which might solve all this though, so I'll have to try that.
- The default display for the main window is a URI entry box and a dial pad. It would make much more sense to display the user's list of contacts here instead (which are currently in a separate window). I rarely enter phone numbers on my mobile phone, instead using the address book. I expect that most VoIP users would be the same, provided that using the address book is convenient.
- Related to the previous point: the Ekiga.net registration service seems to know who is online and who is not. It would be nice if this information could be displayed next to the contacts.
- Ekiga supports multiple sound cards. It was a simple matter of selecting "Logitech USB Headset" as the input and output device on the audio devices page of the preferences to get it to use my headset. Now I hear the ring on my desktop's speakers, but can use the headset for calls.
- It is cool that Ekiga supports video calls, but I have no video camera on my computer. Even though I disabled video support in the preferences, there are still a lot of knobs and whistles in the UI related to video.
Even though there are still a few warts, Ekiga shows a lot of promise. As more organisations provide SIP gateways (such as the UWA gateway), this software will become more important as a way of avoiding expensive phone charges as well as a way of talking to friends/colleagues.
Firefox Ligature Bug Followup
Thought I'd post a followup on my previous post since it generated a bit of interest. First a quick summary:
- It is not an Ubuntu Dapper specific bug. With the appropriate combination of fonts and pango versions, it will exhibit itself on other Pango-enabled Firefox builds (it was verified on the Fedora build too).
- It is not a DejaVu bug, although it is one of the few fonts to exhibit the problem. The simple fact is that not many fonts provide ligature glyphs and include the required OpenType tables for them to be used.
- It isn't a Pango bug. The ligatures are handled correctly in normal GTK applications on Dapper. The bug only occurs with Pango >= 1.12, but that is because older versions did not make use of the OpenType tables in the "basic" shaper (used for latin scripts like english).
- The bug only occurs in the Pango backend, but then the non-Pango renderer doesn't even support ligatures. Furthermore, there are a number of languages that can't be displayed correctly with the non-Pango renderer so it is not very appealing.
The firefox bug is only triggered in the slow, manual glyph positioning code path of the text renderer. This only gets invoked if you have non-default letter or word spacing (such as justified text). In this mode, the width of the normal glyph of the first character in the ligature seems to be used for positioning which results in the overlapping text.
Annoying Firefox Bug
Ran into an annoying Firefox bug after upgrading to Ubuntu Dapper. It seems to affect rendering of ligatures.
At this point, I am not sure if it is an Ubuntu specific bug. The current conditions I know of to trigger the bug are:
- Firefox 1.5 (I am using the 1.5.dfsg+1.5.0.1-1ubuntu10 package).
- Pango rendering enabled (the default for Ubuntu).
- The web page must use a font that contains ligatures and use those ligatures. Since the "DejaVu Sans" font includes ligatures and is the default "sans serif" font in Dapper, this is true for a lot of websites.
- The text must be justified (e.g. use the "text-align: justify" CSS rule).
If you view a site where these conditions are met with an affected Firefox build, you will see the bug: ligature glyphs will be used to render character sequences like "ffi", but only the advance of the first character's normal glyph is used before drawing the next glyph. This results in overlapping glyphs:
London
I've been in London for a bit over a week now at the Launchpad sprint. We've been staying in a hotel near the Excel exhibition centre in Docklands, which has a nice view of the docks, and you can see the planes landing at the airport out the windows of the conference rooms.
I met up with James Bromberger (one of the two main organisers of linux.conf.au 2003) on Thursday, which is the first time I've seen him since he left for the UK after the conference.
Launchpad featured on ELER
Launchpad got a mention in the latest Everybody Loves Eric Raymond comic. It is full of inaccuracies though — we use XML-RPC rather than SOAP.
Comments:
opi -
Oh, c'mon. It was quite fun. :-)
Bugzilla to Malone Migration
The Bugzilla migration on Friday went quite well, so we've now got all the old Ubuntu bug reports in Launchpad. Before the migration, we were up to bug #6760. Now that the migration is complete, there are more than 28000 bugs in the system. Here are some quick points to help with the transition:
- All bugzilla.ubuntu.com accounts were migrated to Launchpad accounts with a few caveats:
  - If you already had a Launchpad account with your bugzilla email address associated with it, then the existing Launchpad account was used.
  - No passwords were migrated from Bugzilla, due to differences in the method of storing them. You can set the password on the account at https://launchpad.net/+forgottenpassword.
  - If you had a Launchpad account but used a different email to the one on your Bugzilla account, then you now have two Launchpad accounts. You can merge the two accounts at https://launchpad.net/people/+requestmerge.
- If you have a bugzilla.ubuntu.com bug number, you can find the corresponding Launchpad bug number with the following URL:
Ubuntu Bugzilla Migration
The migration is finally going to happen, after much testing of migration code and improvements to Malone.
If all goes well, Ubuntu will be using a single bug tracker again on Friday (as opposed to the current system where bugs in main go in Bugzilla and bugs in universe go in Malone).
Comments:
Keshav -
Hiiii,
I am Keshav and i am 22. I am working as software dev.engineer in Software Company . I am currently working on Bugzilla. I think i can get some help in understanding how i can migrate bugzilla . Can you provide me the tips and list the actions so that i can come close in making a effective migration functionality
Switch users from XScreenSaver
Joao: you can configure XScreenSaver to show a "Switch User" button in its password dialog (which calls gdmflexiserver when run). This lets you start a new X session after the screen has locked. This feature is turned on in Ubuntu if you want to try it out.
Of course, this is not a full solution, since it doesn't help you switch to an existing session (you'd need to guess the correct Ctrl+Alt+Fn combo). There is code in gnome-screensaver to support this though, giving you a list of sessions you can switch to.
Moving from Bugzilla to Launchpad
One of the things that was discussed at UBZ was moving Ubuntu's bug tracking over to Launchpad. The current situation sees bugs in main being filed in bugzilla while bugs in universe go in Launchpad. Putting all the bugs in Launchpad is an improvement, since users only need to go to one system to file bugs.
I wrote the majority of the conversion script before the conference, but made a few important improvements at the conference after discussions with some of the developers. Since the bug tracking system is probably of interest to people who weren't at the conf, I'll outline some of the details of the conversion below:
Avahi on Breezy followup
So after I posted some instructions for setting up Avahi on Breezy, a fair number of people at UBZ did so. For most people this worked fine, but it seems that a few people's systems started spewing a lot of network traffic.
It turns out that the problem was actually caused by the zeroconf package (which I did not suggest installing) rather than Avahi. The zeroconf package is not needed for service discovery or .local name lookup, so if you are at UBZ you should remove the package or suffer the wrath of Elmo.
Avahi on Breezy
During conferences, it is often useful to be able to connect to other people's machines (e.g. for collaborative editing sessions with Gobby). This is a place where mDNS hostname resolution can come in handy, so you don't need to remember IP addresses.
This is quite easy to set up on Breezy:
- Install the avahi-daemon, avahi-utils and libnss-mdns packages from universe.
- Restart dbus in order for the new system bus security policies to take effect with "sudo invoke-rc.d dbus restart".
- Start avahi-daemon with "sudo invoke-rc.d avahi-daemon start".
- Edit /etc/nsswitch.conf, and add "mdns" to the end of the "hosts:" line.
Now your hostname should be advertised to the local network, and you can connect to other hosts by name (of the form hostname.local). You can also get a list of the currently advertised hosts and services with the avahi-discover program.
Ubuntu Below Zero
I've been in Montreal since Wednesday for Ubuntu Below Zero.
As well as being my first time in Canada, it was my first time in transit through the USA. Unlike in most countries, I needed to pass through customs and get a visa waiver even though I was in transit. The visa waiver form had some pretty weird questions, such as whether I was involved in persecutions associated with Nazi Germany or its allies.
DSB-R100 USB Radio Tuner
Picked up a DSB-R100 USB Radio tuner off EBay recently. I did this partly because I have better speakers on my computer than on the radio in that room, and partly because I wanted to play around with timed recordings.
Setting it up was trivial -- the dsbr100 driver got loaded automatically, and a program to tune the radio (gnomeradio) was available in the Ubuntu universe repository. I did need to change the radio device from /dev/radio to /dev/radio0 though.
Tag: Plug
PLUG September 2022: Lightning Talks
At the September 2022 PLUG meeting, we held lightning talks. I gave a short talk about recreating old video assets in HD using Inkscape and Pitivi.
PLUG June 2022: Hugo
At the June 2022 Perth Linux Users Group meeting, I gave a talk about building websites with the Hugo static website generator.
PLUG May 2021: GStreamer Editing Services
At the May 2021 Perth Linux Users Group meeting, I gave a talk about using GStreamer Editing Services to programmatically construct and render videos. In particular, it outlined how the library was used to prepare BigBlueButton recordings for publication on YouTube.
PLUG July 2020: Github Actions
At the July 2020 Perth Linux Users Group meeting, I gave a talk about Github Actions: the built-in continuous integration system provided by Github.
Building IoT projects with Ubuntu Core talk
Last week I gave a talk at Perth Linux Users Group about building IoT projects using Ubuntu Core and Snapcraft. The video is now available online. Unfortunately there were some problems with the audio setup leading to some background noise in the video, but it is still intelligible:
The slides used in the talk can be found here.
PLUG March 2019: Building IoT projects with Ubuntu Core
At the March 2019 Perth Linux Users Group meeting, I gave a talk about how Ubuntu Core can be used to build IoT projects that are secure and self-updating.
PLUG April 2018: Confined Apps on the Ubuntu Desktop
At the April 2018 Perth Linux Users Group meeting, I gave a talk about the snapd package manager, and how it is used to deploy confined applications on Ubuntu desktops.
PLUG September 2016: Talking to Chromecasts
At the September 2016 Perth Linux Users Group meeting, I gave a talk about writing Chromecast sender applications from scratch. It gave a rundown of how the Chromecast protocol worked, and what sorts of things could be done on the receiver side.
PLUG October 2015: Ubuntu Snappy
At the October 2015 Perth Linux Users Group meeting, I gave a talk about Ubuntu Snappy. This talk focused on the Ubuntu Core system as it existed back then, and looked at how applications could be deployed on the platform.
PLUG July 2014: Ubuntu Phone
At the July 2014 Perth Linux Users Group meeting, I gave a talk about the Ubuntu Touch/Ubuntu Phone project. This included an overview of getting Ubuntu running on hardware that primarily targeted Android, and how some of the design elements of the Unity Desktop were adapted to a small screen.
Tag: JavaScript
Improved JS Mandelbrot Renderer
Eleven years ago, I wrote a Mandelbrot set generator in JavaScript as a way to test out the then somewhat new Web Workers API, allowing me to make use of multiple cores and not tie up the UI thread with the calculations.
Recently I decided to see how much I could improve it with improvements to the web stack that have happened since then. The result was much faster than what I’d managed previously:
Javascript Mandelbrot Set Fractal Renderer
While at linux.conf.au earlier this year, I started hacking on a Mandelbrot Set fractal renderer implemented in JavaScript as a way to polish my JS skills. In particular, I wanted to get to know the HTML5 Canvas and Worker APIs.
The results turned out pretty well. Click on the image below to try it out:

Clicking anywhere on the fractal will zoom in. You'll need to reload the page to zoom out. Zooming in while the fractal is still being rendered will interrupt the previous rendering job.
Tag: Hugo
PLUG June 2022: Hugo
At the June 2022 Perth Linux Users Group meeting, I gave a talk about building websites with the Hugo static website generator.
Tag: Gnome
Converting BigBlueButton recordings to self-contained videos
When the pandemic lockdowns started, my local Linux User Group started looking at video conferencing tools we could use to continue presenting talks and other events to members. We ended up adopting BigBlueButton: as well as being Open Source, its focus on education made it well suited for presenting talks. It has the concept of a presenter role, and built-in support for slides (it sends them to viewers as images, rather than another video stream). It can also record sessions for later viewing.
Exploring Github Actions
To help keep myself honest, I wanted to set up automated test runs on a few personal projects I host on Github. At first I gave Travis a try, since a number of projects I contribute to use it, but it felt a bit clunky. When I found Github had a new CI system in beta, I signed up and was accepted a few weeks later.
While it is still in development, the configuration language feels lean and powerful. In comparison, Travis's configuration language has obviously evolved over time, with some features not interacting properly (e.g. matrix expansion only working on the first job in a workflow using build stages). While I've never felt like I had a complete grasp of the Travis configuration language, the single-page description of the Actions configuration language feels complete.
Ubuntu Desktop
When the Ubuntu Phone project was cancelled, I moved to the desktop team. The initial goal for the team was to bring up a GNOME 3 based desktop for the Ubuntu 17.10 release that would be familiar to both Ubuntu users coming from the earlier Unity desktop, and users of “vanilla” GNOME 3.
Seeking in Transcoded Streams with Rygel
When looking at various UPnP media servers, one of the features I wanted was the ability to play back my music collection through my PlayStation 3. The complicating factor is that most of my collection is encoded in Vorbis format, which is not yet supported by the PS3 (at this point, it doesn't seem likely that it ever will).
Both MediaTomb and Rygel could handle this to an extent, transcoding the audio to raw LPCM data to send over the network. This doesn't require much CPU power on the server side, and only requires 1.4 Mbit/s of bandwidth, which is manageable on most home networks. Unfortunately the only playback controls enabled in this mode are play and stop: if you want to pause, fast forward or rewind then you're out of luck.
Watching iView with Rygel
One of the features of Rygel that I found most interesting was the external media server support. It looked like an easy way to publish information on the network without implementing a full UPnP/DLNA media server (i.e. handling the UPnP multicast traffic, transcoding to a format that the remote system can handle, etc).
As a small test, I put together a server that exposes the ABC's iView service to UPnP media renderers. The result is a bit rough around the edges, but the basic functionality works. The source can be grabbed using Bazaar:
More Rygel testing
In my last post, I said I had trouble getting Rygel's tracker backend to function and assumed that it was expecting an older version of the API. It turns out I was incorrect and the problem was due in part to Ubuntu specific changes to the Tracker package and the unusual way Rygel was trying to talk to Tracker.
The Tracker packages in Ubuntu remove the D-Bus service activation file for the "org.freedesktop.Tracker" bus name so that if the user has not chosen to run the service (or has killed it), it won't be automatically activated. Unfortunately, instead of just calling a Tracker D-Bus method, Rygel was trying to manually activate Tracker via a StartServiceByName() call. This would fail even if Tracker was running, hence my assumption that it was a tracker API version problem.
Ubuntu packages for Rygel
I promised Zeeshan that I'd have a look at his Rygel UPnP Media Server a few months back, and finally got around to doing so. For anyone else who wants to give it a shot, I've put together some Ubuntu packages for Jaunty and Karmic in a PPA here:
Most of the packages there are just rebuilds or version updates of existing packages, but the Rygel ones were done from scratch. It is the first Debian package I've put together from scratch and it wasn't as difficult as I thought it might be. The tips from the "Teach me packaging" workshop at the Canonical All Hands meeting last month were quite helpful.
Sansa Fuze
On my way back from Canada a few weeks ago, I picked up a SanDisk Sansa Fuze media player. Overall, I like it. It supports Vorbis and FLAC audio out of the box, has a decent amount of on board storage (8GB) and can be expanded with a MicroSDHC card. It does use a proprietary dock connector for data transfer and charging, but that's about all I don't like about it. The choice of accessories for this connector is underwhelming, so a standard mini-USB connector would have been preferable since I wouldn't need as many cables.
PulseAudio
It seems to be fashionable to blog about experiences with PulseAudio, so I thought I'd join in.
I've actually had some good experiences with PulseAudio, seeing some tangible benefits over the ALSA setup I was using before. I've got a cheapish surround sound speaker set connected to my desktop. While it gives pretty good sound when all the speakers are used together, it sounds like crap if only the front left/right speakers are used.
Using Twisted Deferred objects with gio
The gio library provides both synchronous and asynchronous interfaces for performing IO. Unfortunately, the two APIs require quite different programming styles, making it difficult to convert code written to the simpler synchronous API to the asynchronous one.
For C programs this is unavoidable, but for Python we should be able to do better. And if you're doing asynchronous event driven code in Python, it makes sense to look at Twisted. In particular, Twisted's Deferred objects can be quite helpful.
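The shape of the solution is to kick off the asynchronous gio call and resolve a Deferred from its completion callback. A minimal sketch using the old static gio bindings (API details from memory, so double-check the signatures):
import gio
from twisted.internet import defer

def read_file_contents(uri):
    # Returns a Deferred that fires with the file contents.
    d = defer.Deferred()
    gfile = gio.File(uri)

    def on_load_contents(obj, result):
        try:
            contents, length, etag = obj.load_contents_finish(result)
        except gio.Error as exc:
            d.errback(exc)
        else:
            d.callback(contents)

    gfile.load_contents_async(on_load_contents)
    return d

# Usage: chain callbacks as with any other Deferred.
d = read_file_contents("http://www.gnome.org/")
d.addCallback(lambda contents: len(contents))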
Metrics for success of a DVCS
One thing that has been mentioned in the GNOME DVCS debate was that it is as easy to do "git diff" as it is to do "svn diff" so the learning curve issue is moot. I'd have to disagree here.
Traditional Centralised Version Control
With traditional version control systems (e.g. CVS and Subversion) as used by Free Software projects like GNOME, there are effectively two classes of users that I will refer to as "committers" and "patch contributors":
DVCS talks at GUADEC
Yesterday, a BoF was scheduled for discussion of distributed version control systems with GNOME. The BoF session did not end up really discussing the issues of what GNOME needs out of a revision control system, and some of the examples Federico used were a bit snarky.
We had a more productive meeting in the session afterwards where we went over some of the concrete goals for the system. The list from the blackboard was:
Prague
I arrived in Prague yesterday for the Ubuntu Developer Summit. Including time spent in transit in Singapore and London, the flights took about 30 hours.
As I was flying on BA, I got to experience Heathrow Terminal 5. It wasn't quite as bad as some of the horror stories I'd heard. There were definitely aspects that weren't forgiving of mistakes. For example, when taking the train to the "B" section there was a sign saying that if you accidentally got on the train when you shouldn't have it would take 40 minutes to get back to the "A" section.
Inkscape Migrated to Launchpad
Yesterday I performed the migration of Inkscape's bugs from SourceForge.net to Launchpad. This was a full import of all their historic bug data – about 6900 bugs.
As the import only had access to the SF user names for bug reporters, commenters and assignees, it was not possible to link them up to existing Launchpad users in most cases. This means that duplicate person objects have been created with email addresses like $USERNAME@users.sourceforge.net.
Weird GNOME Power Manager error message
Since upgrading to Ubuntu Gutsy I've occasionally been seeing the following notification from GNOME Power Manager:

I'd usually trigger this error by unplugging the AC adapter and then picking suspend from GPM's left click menu.
My first thought on seeing this was "What's a policy timeout, and why is it not valid?" followed by "I don't remember setting a policy timeout". Looking at bug 492132 I found a pointer to the policy_suppression_timeout gconf value, whose description gives a bit more information.
gnome-vfs-obexftp 0.4
It hasn't been long since the last gnome-vfs-obexftp release, but I thought it'd be good to get these fixes out before undertaking more invasive development. The new version is available from:
The highlights of this release are:
- If the phone does not provide free space values in the OBEX capability object, do not report this as zero free space. This fixes Nautilus file copy behaviour on a number of Sony Ericsson phones.
- Fix date parsing when the phone returns UTC timestamps in the folder listings.
- Add some tests for the capability object and folder listing XML parsers. Currently has sample data for Nokia 6230, Motorola KRZR K1, and Sony K800i, Z530i and Z710i phones.
These fixes should improve the user experience for owners of some Sony Ericsson phones by letting them copy files to the phone, rather than Nautilus just telling them that there is no free space. Unfortunately, if there isn't enough free space you'll get an error part way through the copy. This is the best that can be done with the information provided by the phone.
Investigating OBEX over USB
I've had a number of requests for USB support in gnome-vfs-obexftp. At first I didn't have much luck talking to my phone via USB. Running the obex_test utility from OpenOBEX gave the following results:
$ obex_test -u
Using USB transport, querying available interfaces
Interface 0: (null)
Interface 1: (null)
Interface 2: (null)
Use 'obex_test -u interface_number' to run interactive OBEX test client
Trying to talk via any of these interface numbers failed. After reading up a bit, it turned out that I needed to add a udev rule to give permissions on my phone. After doing so, I got a better result:
TXT records in mDNS
Havoc: for a lot of services advertised via mDNS, the client doesn't have the option of ignoring TXT records if it wants to behave correctly.
For example, the Bonjour Printing Specification puts the underlying print queue name in a TXT record (as multiple printers might be advertised by a single print server). While it says that the server can omit the queue name (in which case the default queue name "auto" is used), a client is not going to be able to do what the user asked without checking for the presence of the record.
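For example, with the python-zeroconf library (a modern library used purely for illustration; the printer name below is made up), the queue name comes straight out of the TXT record dictionary, with "auto" as the fallback the specification describes:
from zeroconf import Zeroconf

zc = Zeroconf()
try:
    # Resolve an IPP printer advertised over mDNS/DNS-SD.
    info = zc.get_service_info("_ipp._tcp.local.",
                               "Example Printer._ipp._tcp.local.")
    if info is not None:
        # TXT record keys and values are byte strings.
        queue = info.properties.get(b"rp", b"auto")
        print("print queue: %s" % queue.decode())
finally:
    zc.close()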
gnome-vfs-obexftp 0.3
I've just released a new version of gnome-vfs-obexftp, which includes the features discussed previously. It can be downloaded from:
The highlights of the release include:
- Sync osso-gwobex and osso-gnome-vfs-extras changes from Maemo Subversion.
- Instead of asking hcid to set up the RFCOMM device for communication, use an RFCOMM socket directly. This is both faster and doesn't require enabling experimental hcid interfaces. Based on work from Bastien Nocera.
- Improve free space calculation for Nokia phones with multiple memory types (e.g. phone memory and a memory card). Now the free space for the correct memory type for a given directory should be returned. This fixes various free-space dependent operations in Nautilus such as copying files.
Any bug reports should be filed in Launchpad at:
Stupid Patent Application
I recently received a bug report about the free space calculation in gnome-vfs-obexftp. At the moment, the code exposes a single free space value for the OBEX connection. However, some phones expose multiple volumes via the virtual file system presented via OBEX.
It turns out my own phone does this, which was useful for testing. The Nokia 6230 can store things on the phone’s memory (named DEV in the OBEX capabilities list), or the Multimedia Card (named MMC). So the fix would be to show the DEV free space when browsing folders on DEV and the MMC free space when browsing folders on MMC.
FM Radio in Rhythmbox – The Code
Previously, I posted about the FM radio plugin I was working on. I just posted the code to bug 168735. A few notes about the implementation:
- The code only supports Video4Linux 2 radio tuners (since that’s the interface my device supports, and the V4L1 compatibility layer doesn’t work for it). It should be possible to port it to support both protocols if someone is interested.
- It does not pass the audio through the GStreamer pipeline. Instead, you need to configure your mixer settings to pass the audio through (e.g. unmute the Line-in source and set the volume appropriately). It plugs in a GStreamer source that generates silence to work with the rest of the Rhythmbox infrastructure. This does mean that the volume control and visualisations won’t work.
- No properties dialog yet. If you want to set titles on the stations, you’ll need to edit rhythmdb.xml directly at the moment.
- The code assumes that the radio device is /dev/radio0.
Other than that, it all works quite well (I've been using it for the last few weeks).
FM Radio in Rhythmbox
I've been working on some FM radio support in Rhythmbox in my spare time. Below is a screenshot:

At the moment, the basic tuning and mute/unmute works fine with my DSB-R100. I don't have any UI for adding/removing stations at the moment though, so it is necessary to edit ~/.gnome2/rhythmbox/rhythmdb.xml to add them.
Comments:
Joel -
This feature would truly be a welcome addition!
I'm especially pleased it's being developed by a fellow Australian! (If the radio stations are any indication)
FM Radio Tuners in Feisty
I upgraded to Feisty about a month or so ago, and it has been a nice improvement so far. One regression I noticed though was that my USB FM radio tuner had stopped working (or at least, Gnomeradio could no longer tune it).
It turns out that some time between the kernel release found in Edgy and the one found in Feisty, the dsbr100 driver had been upgraded from the Video4Linux 1 API to Video4Linux 2. Now the driver nominally supports the V4L1 ioctls through the v4l1_compat layer, but it doesn't seem to implement enough V4L2 ioctls to make it usable (the VIDIOCGAUDIO ioctl fails).
ZeroConf support for Bazaar
When at conferences and sprints, I often want to see what someone else is working on, or to let other people see what I am working on. Usually we end up pushing up to a shared server and using that as a way to exchange branches. However, this can be quite frustrating when competing for outside bandwidth when at a conference.
It is possible to share the branch from a local web server, but that still means you need to work out the addressing issues.
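Advertising the branch itself is the easy part. With the python-zeroconf library (again a modern library; the "_bzr._tcp" service type, name and path below are illustrative rather than what any real plugin used), it is a single service registration:
import socket
from zeroconf import ServiceInfo, Zeroconf

# Advertise a branch being served on this machine, e.g. by "bzr serve".
info = ServiceInfo(
    "_bzr._tcp.local.",                      # hypothetical service type
    "my-feature-branch._bzr._tcp.local.",    # instance name
    addresses=[socket.inet_aton("192.168.1.10")],
    port=4155,                               # default "bzr serve" port
    properties={"path": "/my-feature-branch"},
)

zc = Zeroconf()
zc.register_service(info)
# ... keep the process alive while the branch is shared ...
# zc.unregister_service(info); zc.close()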
gnome-vfs-obexftp 0.1 released
I put out a tarball release of gnome-vfs-obexftp here:
This includes a number of fixes since the work I did in October:
- Fix up some error handling in the dbus code.
- Mark files under the obex:/// virtual root as being local. This causes Nautilus to process the desktop entries and give us nice icons.
- Ship a copy of osso-gwobex, built statically into the VFS module. This removes the need to install another shared library only used by one application.
As well as the standard Gnome and D-BUS libraries, you will need OpenOBEX >= 1.2 and Bluez-Utils >= 3.7. The hcid daemon must be started with the -x flag to enable the experimental D-BUS interfaces used by the VFS module. You will also need a phone or other device that supports OBEX FTP :)
UTC+9
Daylight saving started yesterday: the first time since the 1991/1992 summer for Western Australia. The legislation finally passed the upper house on 21st November (12 days before the transition date). The updated tzdata packages were released on 27th November (6 days before the transition). So far, there hasn't been an updated package released for Ubuntu (see bug 72125).
One thing brought up in the Launchpad bug was that not all applications used the system /usr/share/zoneinfo time zone database. So other places that might need updating include:
Building obex-method
I published a Bazaar branch of the Nautilus obex method here:
http://bazaar.launchpad.net/~jamesh/+junk/gnome-vfs-obexftp
This version works with the hcid daemon included with Ubuntu Edgy, rather than requiring the btcond daemon from Maemo.
Some simple instructions on building it:
- Download and build the osso-gwobex library:
svn checkout https://stage.maemo.org/svn/maemo/projects/connectivity/osso-gwobex/trunk osso-gwobex
The debian/ directory should work fine to build a package using debuild.
- Download and build the obex module:
bzr branch http://bazaar.launchpad.net/~jamesh/+junk/gnome-vfs-obexftp
There is no debian packaging for this — just an autogen.sh script.
Playing Around With the Bluez D-BUS Interface
In my previous entry about using the Maemo obex-module on the desktop, Johan Hedberg mentioned that bluez-utils 3.7 included equivalent interfaces to the osso-gwconnect daemon used by the method. Since then, the copy of bluez-utils in Edgy has been updated to 3.7, and the necessary interfaces are enabled in hcid by default.
Before trying to modify the VFS code, I thought I'd experiment a bit with the D-BUS interfaces via the D-BUS python bindings. Most of the interesting method calls exist on the org.bluez.Adapter interface. We can easily get the default adapter with the following code:
OBEX in Nautilus
When I got my new laptop, one of the features it had that my previous one didn't was Bluetooth support. There are a few Bluetooth related utilities for Gnome that let you send and receive SMS messages and a few other things, but a big missing feature is the ability to transfer files to and from the phone easily.
Ideally, I'd be able to browse the phone's file system using Nautilus. Luckily, the Maemo guys have already done the hard work of writing a gnome-vfs module that speaks the OBEX FTP protocol. I had a go at compiling it on my laptop (running Ubuntu Edgy), and you can see the result below:
Gnome-gpg 0.5.0 Released
Over the weekend, I released gnome-gpg 0.5.0. The main features in this release are support for running without gnome-keyring-daemon (of course, you can't save the passphrase in this mode), and use of the same keyring item name for the passphrase as Seahorse. The release can be downloaded here:
I also switched over from Arch to Bazaar. The conversion was fairly painless using bzr baz-import-branch, and means that I have both my revisions and Colin's revisions in a single tree. The branch can be pulled from:
Vote Counting and Board Expansion
Recently one of the Gnome Foundation directors quit, and there has been a proposal to expand the board by 2 members. In both cases, the proposed new members have been taken from the list of candidates who did not get seats in the last election, from highest vote-getter down.
While at first this sounds sensible, the voting system we use doesn't provide a way of finding out who would have been selected for the board if a particular candidate was removed from the ballot.
JHBuild Updates
The progress on JHBuild has continued (although I haven't done much in the last week or so). Frederic Peters of JhAutobuild fame now has a CVS account to maintain the client portion of that project in tree.
Perl Modules (#342638)
One of the other things that Frederic has been working on is support for
building Perl modules (which use a Makefile.PL
instead of a configure
script). His initial patch worked fine for tarballs, but by switching
over to the new generic version control code in jhbuild it was possible
to support Perl modules maintained in any of the supported version
control systems without extra effort.
JHBuild Improvements
I've been doing most JHBuild development in my bzr branch recently. If you have bzr 0.8rc1 installed, you can grab it here:
bzr branch http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.dev
I've been keeping a regular CVS import going at
http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.cvs
using Tailor, so
changes people make to module sets in CVS make their way into the bzr
branch. I've used a small hack so that merges back into CVS get
recorded correctly in the jhbuild.cvs
branch:
intltool and po/LINGUAS
Rodney: my
suggestions for intltool were not intended as an attack. I just don't
really see much benefit in intltool providing its own
po/Makefile.in.in
file.
The primary difference between the intltool po/Makefile.in.in and the
version provided by gettext or glib is that it calls intltool-update
rather than xgettext to update the PO template, so that strings get
correctly extracted from file types like desktop entries, Bonobo
component registration files, or various other XML files.
Ekiga
I've been testing out Ekiga recently, and so far the experience has been a bit hit and miss.
- Firewall traversal has been unreliable. Some numbers (like the SIPPhone echo test) work great. In some cases, no traffic has gotten through (where both parties were behind Linux firewalls). In other cases, voice gets through in one direction but not the other. Robert Collins has some instructions on setting up siproxd which might solve all this though, so I'll have to try that.
- The default display for the main window is a URI entry box and a dial pad. It would make much more sense to display the user's list of contacts here instead (which are currently in a separate window). I rarely enter phone numbers on my mobile phone, instead using the address book. I expect that most VoIP users would be the same, provided that using the address book is convenient.
- Related to the previous point: the Ekiga.net registration service seems to know who is online and who is not. It would be nice if this information could be displayed next to the contacts.
- Ekiga supports multiple sound cards. It was a simple matter of selecting "Logitech USB Headset" as the input and output device on the audio devices page of the preferences to get it to use my headset. Now I hear the ring on my desktop's speakers, but can use the headset for calls.
- It is cool that Ekiga supports video calls, but I have no video camera on my computer. Even though I disabled video support in the preferences, there are still a lot of knobs and whistles in the UI related to video.
Even though there are still a few warts, Ekiga shows a lot of promise. As more organisations make SIP gateways available (such as the UWA gateway), this software will become more important as a way of avoiding expensive phone charges as well as a way of talking to friends/colleagues.
Annoying Firefox Bug
Ran into an annoying Firefox bug after upgrading to Ubuntu Dapper. It seems to affect rendering of ligatures.
At this point, I am not sure if it is an Ubuntu specific bug. The current conditions I know of to trigger the bug are:
- Firefox 1.5 (I am using the 1.5.dfsg+1.5.0.1-1ubuntu10 package).
- Pango rendering enabled (the default for Ubuntu).
- The web page must use a font that contains ligatures and use those ligatures. Since the "DejaVu Sans" includes ligatures and is the default "sans serif" font in Dapper, this is true for a lot of websites.
- The text must be justified (e.g. use the "text-align: justify" CSS rule).
If you view a site where these conditions are met with an affected
Firefox build, you will see the bug: ligature glyphs will be used to
render character sequences like "ffi
", but only the advance of the
first character's normal glyph is used before drawing the next glyph.
This results in overlapping glyphs:
Re: Lazy loading
Emmanuel: if you are using a language like Python, you can let the language keep track of your state machine for something like that:
import gobject

def load_items(treeview, liststore, items):
    for obj in items:
        liststore.append((obj.get_foo(),
                          obj.get_bar(),
                          obj.get_baz()))
        yield True
    treeview.set_model(liststore)
    yield False

def lazy_load_items(treeview, liststore, items):
    gobject.idle_add(load_items(treeview, liststore, items).next)
Here, load_items()
is a generator that will iterate over a sequence
like [True, True, ..., True, False]
. The next()
method is used to
get the next value from the iterator. When used as an idle function
with this particular generator, it results in one item being added to
the list store per idle call till we get to the end of the generator
body where the "yield False
" statement results in the idle
function being removed.
Gnome Logo on Slashdot
Recently, Jeff brought up the issue of the use of the old Gnome logo on Slashdot. The reasoning being that since we decided to switch to the new logo as our mark back in 2002, it would be nice if they used that mark to represent stories about us.
Unfortunately this request was shot down by Rob Malda, because the logo is "either ugly or B&W (read:Dull)".
Not to be discouraged, I had a go at revamping the logo to meet Slashdot's high standards. After all, if they were going to switch to the new logo, they would have done so when we first asked. The result is below:
Gnome-gpg 0.4.0 Released
I put out a new release of gnome-gpg containing the fixes I mentioned previously.
The internal changes are fairly extensive, but the user interface remains pretty much the same. The main differences are:
- If you enter an incorrect passphrase, the password prompt will be displayed again, the same as when gpg is invoked normally.
- If an incorrect passphrase is stored in the keyring (e.g. if you changed your key's passphrase), the passphrase prompt will be displayed. Previously you would need to use the --forget-passphrase option to tell gnome-gpg to ignore the passphrase in the keyring.
- The passphrase dialog is now set as a transient for the terminal that spawned it, using the same algorithm as zenity. This means that the passphrase dialog pops up on the same workspace as the terminal, and can't be obscured by the terminal.
Comments:
Marius Gedminas -
Any ideas how to use it with Mutt?
Using Tailor to Convert a Gnome CVS Module
In my previous post, I mentioned using Tailor to import jhbuild into a Bazaar-NG branch. In case anyone else is interested in doing the same, here are the steps I used:
1. Install the tools
First create a working directory to perform the import, and set up tailor. I currently use the nightly snapshots of bzr, which did not work with Tailor, so I also grabbed bzr-0.7:
$ wget http://darcs.arstecnica.it/tailor-0.9.20.tar.gz
$ wget http://www.bazaar-ng.org/pkg/bzr-0.7.tar.gz
$ tar xzf tailor-0.9.20.tar.gz
$ tar xzf bzr-0.7.tar.gz
$ ln -s ../bzr-0.7/bzrlib tailor-0.9.20/bzrlib
2. Prepare a local CVS Repository to import from
Revision Control Migration and History Corruption
As most people probably know, the Gnome project is planning a migration
to Subversion. In contrast, I've
decided to move development of jhbuild over to
bzr
. This decision is a bit easier for
me than for other Gnome modules because:
- No need to coordinate with GDP or GTP, since I maintain the docs and there are no translations.
- Outside of the moduleset definitions, the large majority of development and commits are done by me.
- There aren't really any interesting branches other than the mainline.
I plan to leave the Gnome module set definitions in CVS/Subversion though, since many people help in keeping them up to date, so leaving them there has some value.
gnome-gpg improvement
The gnome-gpg utility makes PGP a bit nicer to use on Gnome with the following features:
- Present a Gnome password entry dialog for passphrase entry.
- Allow the user to store the passphrase in the session or permanent keyring, so it can be provided automatically next time.
Unfortunately there are a few usability issues:
- The anonymous/authenticated user radio buttons are displayed in the password entry dialog, while they aren't needed.
- The passphrase is prompted for even if gpg does not require it to complete the operation.
- If the passphrase is entered incorrectly, the user is not prompted for it again like they would be with plain gpg.
- If an incorrect passphrase is provided by gnome-keyring-daemon, you need to remove the item using gnome-keyring-manager or use the --force-passphrase command line argument.
I put together a patch to fix these issues by using gpg's
--status-fd/--command-fd interface. Since this provides status
information to gnome-gpg, it means it knows when to prompt for and
send the passphrase, and when it gave the wrong passphrase.
Drive Mount Applet (again)
Thomas: that behaviour looks like a bug. Are all of those volumes mountable by the user? The drive mount applet is only meant to show icons for the mount points the user can mount.
Note also that the applet is using the exact same information for the list of drives as Nautilus is. If the applet is confusing, then wouldn't Nautilus's "Computer" window also be confusing?
To help debug things, I wrote a little program to dump all the data
provided by GnomeVFSVolumeMonitor
:
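The little program itself isn't included in this excerpt. A rough sketch of that kind of dump tool, assuming the gnome-python gnomevfs bindings expose the monitor as gnomevfs.VolumeMonitor() with get_connected_drives() and get_mounted_volumes() methods (all of those names are assumptions, not code from the post), might look like:
import gnomevfs

# Dump the drives and volumes reported by GnomeVFSVolumeMonitor.
# (method names are assumptions about the gnome-python bindings)
monitor = gnomevfs.VolumeMonitor()
for drive in monitor.get_connected_drives():
    print('Drive: %s' % drive.get_display_name())
for volume in monitor.get_mounted_volumes():
    print('Volume: %s (user visible: %s)' % (volume.get_display_name(),
                                             volume.is_user_visible()))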
Preferences for the Drive Mount Applet
In my previous article, I outlined the thought process behind the redesign of the drive mount applet. Although it ended up without any preferences, I don't necessarily think that it doesn't need any preferences.
A number of people commented on the last entry requesting a particular preference: the ability to hide certain drives in the drive list. Some of the options include:
- Let the user select which individual drives to display
- Let the user select which classes of drive to display (floppy, cdrom, camera, music player, etc).
- Select whether to display drives only when they are mounted, or only when they are mountable (this applies to drives which contain removable media).
Of these choices, the first is probably the simplest to understand, so might be the best choice. It could be represented in the UI as a list of the available drives with a checkbox next to each. In order to not hide new drives by default, it would probably be best to maintain a list of drives to hide rather than drives to show.
Features vs. Preferences
As most people know, there have been some flamewars accusing Gnome developers of removing options for the benefit of "idiot users". I've definitely been responsible for removing preferences from some parts of the desktop in the past. Probably the most dramatic is the drive mount applet, which started off with a preferences dialog with the following options:
- Mount point: which mount point should the icon watch the state of?
- Update interval: at what frequency should the mount point be polled to check its status?
- Icon: what icon should be used to represent this mount point. A selection of various drive type icons were provided for things like CDs, Floppys, Zip disks, etc.
- Mounted Icon and Unmounted Icon: if "custom" was selected for the above, let the user pick custom image files to display the two states.
- Eject disk when unmounted: whether to attempt to eject the disk when the unmount command is issued.
- Use automount-friendly status test: whether to use a status check that wouldn't cause an automounter to mount the volume in question.
These options (and the applet in general) survived pretty much intact from the Gnome 1.x days. However the rest of Gnome (and the way people use computers in general) had moved forward since then, so it seemed sensible to rethink the preferences provided by the applet:
Re: Pixmap Memory Usage
Glynn: I suspect that the Pixmap memory usage has something to do with image rendering rather than applets in particular doing something stupid. Notice that most other GTK programs seem to be using similar amounts of pixmap memory.
To help test this hypothesis, I used the following Python program:
import gobject, gtk

win = gtk.Window()
win.set_title('Test')
win.connect('destroy', lambda w: gtk.main_quit())

def add_image():
    img = gtk.image_new_from_stock(gtk.STOCK_CLOSE,
                                   gtk.ICON_SIZE_BUTTON)
    win.add(img)
    img.show()

gobject.timeout_add(30000, add_image)
win.show()
gtk.main()
According to xrestop
, this program has low pixmap memory usage when
it starts, but jumps up to similar levels to the other apps after 30
seconds.
Switch users from XScreenSaver
Joao: you can
configure XScreenSaver to show a "Switch User" button in its
password dialog (which calls gdmflexiserver
when run). This lets you
start a new X session after the screen has locked. This feature is
turned on in Ubuntu if you want to try it out.
Of course, this is not a full solution, since it doesn't help you switch to an existing session (you'd need to guess the correct Ctrl+Alt+Fn combo). There is code in gnome-screensaver to support this though, giving you a list of sessions you can switch to.
DSB-R100 USB Radio Tuner
Picked up a DSB-R100 USB Radio tuner off eBay recently. I did this partly because I have better speakers on my computer than on the radio in that room, and partly because I wanted to play around with timed recordings.
Setting it up was trivial -- the dsbr100
driver got loaded
automatically, and a program to tune the radio
(gnomeradio) was
available in the Ubuntu universe repository. I did need to change the
radio device from /dev/radio
to /dev/radio0
though.
Playing with Google Maps API
I finally got round to playing with the Google Maps API, and the results can be seen here. I took data from the GnomeWorldWide wiki page and merged in some information from the Planet Gnome FOAF file (which now includes the nicknames and hackergotchis).
The code is available here (a BZR branch, but you can easily download the latest versions of the files directly). The code works roughly as follows:
HTTP resource watcher
I've got most of the features of my HTTP resource watching code I was working on for GWeather done. The main benefits over the existing gnome-vfs based code are:
- Simpler API. Just connect to the updated signal on the resource object, and you get notified when the resource changes.
- Supports gzip and deflate content encodings, to reduce bandwidth usage.
- Keeps track of the Last-Modified date and Etag value for the resource so that it can do conditional GETs of the resource for simple client side caching.
- Supports the Expires header. If the update interval is set at 30 minutes but the web server says that it won't be updated for an hour, then use the longer timeout till the next check.
- If a permanent redirect is received, then the new URI is used for future checks.
- If a 410 Gone response is received, then future checks are not queued (they can be restarted with a refresh() call).
I've also got some code to watch the HTTP proxy settings in GConf, but that seems to trigger a hang in libsoup (bug 309867).
Bryan's Bazaar Tutorial
Bryan: there are a number of steps you can skip in your little tutorial:
- You don't need to set my-default-archive. If you often work with multiple archives, you can treat working copies for all archives pretty much the same. If you are currently inside a working copy, any branch names you use will be relative to your current one, so you can still use short branch names in almost all cases (this is similar to the reason I don't set $CVSROOT when working with CVS).
HTTP code in GWeather
One of the things that pisses me off about gweather
is that it
occasionally hangs and stops updating. It is a bit easier to tell when
this has occurred these days, since it is quite obvious something's
wrong if gweather thinks it is night time when it clearly isn't.
The current code uses gnome-vfs, which isn't the best choice for this
sort of thing. The code is the usual mess you get when turning an
algorithm inside out to work through callbacks in C:
pkg-config patches
I uploaded a few patches to the pkg-config bugzilla recently, which will hopefully make their way into the next release.
The first is related to bug 3097, which has to do with the broken dependent library elimination code added to 0.17.
The patch adds a Requires.private
field to .pc
files that contains a
list of required packages like Requires
currently does, which has the
following properties:
Clipboard Handling
Phillip: your idea about direct client to client clipboard transfers is doable with the current X11 clipboard model:
- Clipboard owner advertises that it can convert selection to some special target type such as "client-to-client-transfer" or similar.
- If the pasting client supports client to client transfer, it can check the list of supported targets for the "client-to-client-transfer" target type and request conversion to that target.
- The clipboard owner returns a string containing details of how to request the data (e.g. hostname/port, or some other scheme that only works for the local host).
- Pasting application contacts the owner out of band and receives the data.
Yes, this requires modifications to applications in order to work correctly, but so would switching to a new clipboard architecture.
Anonymous voting
I put up a proposal for implementing anonymous voting for the foundation elections on the wiki. This is based in part on David's earlier proposal, but simplifies some things based on the discussion on the list and fleshes out the implementation a bit more.
It doesn't really add to the security of the elections process (doing so would require a stronger form of authentication than "can read a particular email account"), but does anonymise the election results and lets us do things like tell the voter that their completed ballot was malformed on submission.
Clipboard Manager
Phillip: the majority of applications have no cut and paste code in
them — they rely on the cut and paste behaviour of the standard
widgets. The standard widgets like GtkEntry in GTK 2.6 already mark
their selections as being savable (in fact, any code that calls
gtk_clipboard_set_text() will have its selection marked as savable).
Most of the remaining cases are ones where you'd want to be selective
in what gets saved (e.g. selecting cell ranges in Gnumeric, or regions
of images in Gimp), so need to be handled specially anyway.
bgchannel:// Considered Harmful?
Recently Bryan posted about background channels -- a system for
automatically updating desktop wallpaper. One of the features of the
design is a new URI scheme based on the same ideas as webcal://, which
I think is a bad idea (as dobey has also pointed out).
The usual reasoning for creating a URI scheme like this goes something
like this:
- You want to be able to perform some action when a link in a web page is clicked.
- The action requires that you know the URI of the link (usually to allow contacting the original server again).
- When the web browser activates a helper application bound to a MIME type, you just get the path to a saved copy of the resource, which doesn't satisfy (2).
- Helper applications for URI types get passed the full URI.
So the solution taken with Apple's iCal and Bryan's background
channels is to strip the http: off the start of the resource's URI,
and replace it with a custom scheme name. This works pretty well for
the general case, but causes problems for a few simple use cases
that'll probably turn out to be more common than you think:
8 March 2005
South Africa
I put up my photos from the trip to Cape Town online. Towards the end there are some photos I took while hiking up Table Mountain.
Building Gnome
It looks like with the Gnome 2.10 release, some packages fail to build from CVS if you are using a version of libtool older than 1.5.12. This is due to the way libtool verifies the version strings — in versions prior to 1.5.12, the check to make sure that the interface version numbers were non negative used a shell pattern that only matched numbers up to 3 digits long.
6 January 2005
Travels
I've put some of the photos from my trip to Mataró, and the short stop over in Japan on the way back. The Mataró set includes a fair number taken around La Sagrida Familia, and the Japan set is mostly of things around the Naritasan temple (I didn't have enough time to get into Tokyo).
Multi-head
A few months back, I got a second monitor for my computer and configured
it in a Xinerama-style setup (I'm actually using the MergedFB
feature
of the radeon driver, but it looks like Xinerama to X clients). Overall
it has been pretty nice, but there are a few things that Gnome could do
a bit nicer in the setup:
8 December 2004
Mataró
I've been in Mataró (about an hour from Barcelona) now since Sunday, and it's quite a nice place. It is a bit cooler than Perth due to it being the middle of Winter here, but the way most of the locals are rugged up you'd think it was a lot colder. It's great to catch up with everyone, and a number of pygtk developers will be turning up over the next few days for the BOF on the weekend.
Nautilus Extensions
One of the changes in the Gnome 2.9 development series is the removal of most of the Bonobo code from Nautilus, which results in a speed boost due to lower complexity and less IPC overhead. This had the effect of breaking existing bonobo based context menus, property pages and views. The first two can be converted to the Nautilus extension interface, but the last has no equivalent in the new code (partly because Nautilus is concentrating on being a file manager these days rather than a universal component shell like it was in the early days).
25 October 2004
Drive Mount Applet
The new drive mount applet is now checked into the HEAD branch of gnome-applets, so will be in Gnome 2.10. There are a few things left to do, such as making it possible to open the file manager as well as unmounting/ejecting it. I did up a screenshot showing what it looks like as an applet.
Libtool
Finally managed to reproduce a particular libtool bug that people have
reported on and off. It does show why some people decide that .la
files are evil, since it doesn't occur when people delete those files
...
20 October 2004
Even More Icon Theme Stuff
To make it a bit easier to correctly display themed icons, I added
support to GtkImage
, so that it is as easy as calling
gtk_image_new_from_icon_name()
or gtk_image_set_from_icon_name()
.
The patch is attached to bug
#155688.
This code takes care of theme changes so the application developer doesn't need to. Once this is in, it should be trivial to add themed icon support to various other widgets that use GtkImage (such as GtkAbout and GtkToolItem).
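As a rough illustration of what using the new calls looks like, here is the PyGTK spelling of the same API; this assumes the bindings expose the same entry points once the patch lands, and the icon names are just examples:
import gtk

# An image widget that follows the icon theme for us.
image = gtk.image_new_from_icon_name('folder', gtk.ICON_SIZE_BUTTON)
button = gtk.Button()
button.add(image)

# Later, point the same widget at a different themed icon; theme
# changes are handled by GtkImage itself.
image.set_from_icon_name('gnome-dev-cdrom', gtk.ICON_SIZE_BUTTON)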
Drive Mount Applet
I started to look at bringing the drive mount applet from gnome-applets
up to scratch, since it hasn't really had much work done on it other
than porting to the 2.x development platform.
The applet is a classic example of Gnome 1.x user interface complexity. The applet shows a button that can be clicked to mount or unmount a particular mount point. For this simple functionality, it provides the following preferences:
11 October 2004
Looks like we are going to have at least another three years with The Rodent. It also looks like they will have a majority in the senate, which will reduce the senate's effectiveness as a house of review.
We might not have John Howard for the entire term though, since he is of retirement age. NineMSN seems to think that Peter Costello is already the leader.
It also looks like The Democrats senators up for reelection got completely wiped out, with much of their support going to The Greens.
4 October 2004
Icon Theme APIs (continued)
Of course, after recommending that people use
gtk_icon_theme_load_icon()
to perform the icon load and scale the icon
for you, Ross manages to find a
bug in that function.
If the icon is not found in the icon theme, but instead in the legacy
$prefix/share/pixmaps
directory, then gtk_icon_theme_load_icon()
will not scale the image down (it will scale them up if necessary
though).
jhbuild
Jhbuild now includes a notification icon when running in the default terminal mode. The code is loosely based on Davyd's patch, but instead uses Zenity's notification icon support. If you have the HEAD branch of Zenity installed, it should display without any further configuration. Some of the icons are a little difficult to tell apart at notification icon sizes, so it would be good to update some of them.
29 September 2004
Ubuntu seems to have taken off very quickly since the preview release came out a few weeks ago. In general, people seem to like the small tweaks we've made to the default Gnome install. Of course, after the preview came out people found bugs in some of my Gnome patches ...
One of the things we added was the trash applet on the panel. I made a fair number of fixes that make the applet fit in with the desktop a bit better and handle error conditions a bit better.
Applets vs. Notification Icons
It seems that a lot of people get confused by what things on the panel should be applets and what should be notification icons. Originally, the main difference between the two was this:
- The lifecycle of an applet is managed by the panel, which in turn is tied to the lifecycle of the session. So applets generally live for the length of the session (unless they are added/removed part way through a session).
- Notification icons are more transient. Their lifecycle is linked to whatever app they were created by. Once the app exits the notification icon goes away too.
There are some other differences though:
Notification Icons
I decided to go ahead and write the code to allow
Zenity to listen for commands on
stdin. It was pretty easy to add, and Glynn accepted the patch so it is
in the latest CVS version. The main difference between the
implementation and what I described earlier is that you need to pass the
--listen
argument to Zenity to activate this mode (without it, it acts
as a one-shot notification icon where it exits when the icon is clicked
on). The easiest way to use it from a bash script is to tie Zenity to a
file descriptor like this:
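The shell snippet is missing from this excerpt, so here is the same idea sketched from Python instead, keeping a pipe to the icon's stdin; the tooltip:/message: command names come from Zenity's notification listen protocol as I remember it, so treat the details as an approximation rather than the post's own example.
import subprocess

# Start a long-lived notification icon and keep a pipe to it.
proc = subprocess.Popen(['zenity', '--notification', '--listen'],
                        stdin=subprocess.PIPE)
proc.stdin.write(b'tooltip: Backup running...\n')
proc.stdin.flush()
# ... do some work ...
proc.stdin.write(b'message: Backup finished\n')
proc.stdin.close()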
14 September 2004
Foundation Elections (continued)
bolsh: as I
said, many real elections make modifications to an idealised STV system
to simplify vote counting. The counting for the .au
senate
elections
sounds like it takes a random sample of votes when transferring
preferences too.
Also, in my description a candidate needed to get more votes than the quota and the quota could be fractional. In contrast, the Australian senate elections say candidates must reach the Droop Quota, which is the smallest integer greater than the quota formula I used. If you are using random sampling for preference transfers so that each ballot has a weight of either 0 or 1, then this is equivalent. However, if you count fractional votes, then it does make a difference.
13 September 2004
Foundation Elections
There has been talk on the foundation list about changing the vote counting procedure to something more fair. The method being proposed is Single Transferable Vote, which is the same system used within a single electorate for the senate vote in the Australian Federal Election. As with the Australian elections, some people have some trouble understanding exactly how it works, so here is a description.
- Each voter orders every candidate on their ballot in order of preference. Each ballot is assigned a weight of 1.
- The ballots are grouped by the first preference.
- If any candidate's total reaches the quota, then they get in. The quota is chosen such that if there are s seats, then at most s candidates can reach the quota. So a candidate must get more than n/(s + 1) first preference votes in order to reach the quota.
- If any candidate gets over the quota, then the highest vote getter is elected, and their votes are redistributed at a reduced strength. If x people voted for the candidate, then the weighting of each of the votes is scaled by (x - q)/x where q is the quota (x - q is the number of votes over the quota). The winning candidate's name is removed from all ballots and we go back to step 2 and repeat to find the next winner.
- If no candidate reaches the quota, then the candidate with the least first preference votes is removed from the election. Their name is removed from all ballots, and we go back to step 2. The votes for the removed candidate are redistributed at the same strength, since they didn't help elect a candidate.
Note that this vote counting system is identical to Instant-runoff voting when there is only a single seat. The quota calculation shows that the winning candidate needs to get more than 50% of the votes to win, as expected.
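For anyone who finds code easier to follow than prose, here is a rough sketch of the counting procedure described above. It ignores tie breaking and the fractional-versus-sampled transfer detail, so treat it as an illustration of the idea rather than a usable counting program.
def count_stv(ballots, seats):
    # Each ballot is an ordered list of candidate names; start at weight 1.
    ballots = [[list(prefs), 1.0] for prefs in ballots]
    quota = len(ballots) / float(seats + 1)
    candidates = set(c for prefs, weight in ballots for c in prefs)
    elected = []
    while len(elected) < seats and candidates:
        # Group ballots by their current first preference.
        totals = dict.fromkeys(candidates, 0.0)
        for prefs, weight in ballots:
            if prefs:
                totals[prefs[0]] += weight
        top = max(totals, key=totals.get)
        if totals[top] > quota:
            # Elected: redistribute those ballots at reduced strength,
            # scaling each one by (x - q) / x.
            elected.append(top)
            scale = (totals[top] - quota) / totals[top]
            for ballot in ballots:
                if ballot[0] and ballot[0][0] == top:
                    ballot[1] *= scale
            removed = top
        else:
            # No one reached the quota: drop the candidate with the
            # fewest first preference votes at full strength.
            removed = min(totals, key=totals.get)
        candidates.discard(removed)
        for ballot in ballots:
            if removed in ballot[0]:
                ballot[0].remove(removed)
    return elected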
20 May 2004
Mail Viruses
The barrage of mail viruses and their side effects is getting quite annoying. In the past week, I've had a gnome.org mailing list subscriptions disabled twice. After looking at the mailing list archive, it was pretty obvious why.
The mail server that serves my account is set up to reject windows executables and a few other viruses at SMTP delivery time (so it isn't responsible for generating bounces). Unfortunately, a number of viruses got through to the mailing lists and were subsequently rejected before reaching my account. After a certain number of bounces of this type, mailman helpfully disables delivery.
28 April 2003
Red Hat 9
Installed it on a few boxes, and I like what I see so far. The Bluecurve mouse cursors look really nice. It is also good to see some more of my packages included in the distro (fontilus and pyorbit).
Spam
Some spammer has been sending mail with random @daa.com.au addresses in
the From:
field. So far, I have received lots of double bounces, a few
messages asking if we know about the spam, and many automated responses
(some saying the message came from a blocked domain!). The Received
headers indicate that the mail comes from somewhere else, so there
isn't much I can do. I hate spammers.
17 June 2002
Work
Last week, one of the servers died because one of the sticks of memory died. After pulling it out, the system booted fine. It would have been a lot easier to test if I didn't have to open it up to plug a floppy drive in. I now have Memtest86 in the GRUB boot menu. Was pretty easy to set up:
cp memtest.bin /boot
grubby --add-kernel="/boot/memtest.bin" --title="Memtest86"
This is the second stick of DDR memory we have had that died; probably due to overheating. As the server has 5 IDE ribbon cables, I might look at getting rounded cables which Jaycar is stocking these days.
12 May 2002
The Call for Papers is out:
http://conf.linux.org.au/pipermail/lca-helpers/2002-May/000109.html
There is also an HTML version on the website, but it doesn't quite match the final version of the CFP (yet).
Beer
Bottled the honey ale today. It will be interesting to see how it tastes in a few weeks. The sweetness was gone, but I could definitely taste the honey still. It should be very nice.
GNOME 2.0
Put out yet another beta of libglade for the GNOME 2.0 beta 5 release which should be coming out this week. I should also make new releases of pygtk and gnome-python as well. I have made a number of improvements to the code generator, so pygtk is a bit more complete. The last gnome-python release no longer compiles with the latest GConf, so it also needs a new release.
Tag: GStreamer
Converting BigBlueButton recordings to self-contained videos
When the pandemic lockdowns started, my local Linux User Group started looking at video conferencing tools we could use to continue presenting talks and other events to members. We ended up adopting BigBlueButton: as well as being Open Source, its focus on education made it well suited for presenting talks. It has the concept of a presenter role, and built-in support for slides (it sends them to viewers as images, rather than another video stream). It can also record sessions for later viewing.
GLib integration for the Python asyncio event loop
As an evening project, I've been working on a small library that integrates the GLib main loop with Python's asyncio. I think I've gotten to the point where it might be useful to other people, so have pushed it up here:
https://github.com/jhenstridge/asyncio-glib
This isn't the only attempt to integrate the two event loops, but the other I found (Gbulb) is unmaintained and seems to reimplement a fair bit of asyncio itself (e.g. it has its own transport classes). So I thought I'd see if I could write something smaller and more maintainable, reusing as much code from the standard library as possible.
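As a usage sketch, installing the loop policy is all that is needed before running asyncio code as usual; the GLibEventLoopPolicy name is how I remember the project's README, so double check it against the repository.
import asyncio
import asyncio_glib

# Dispatch asyncio callbacks from the GLib main loop.
asyncio.set_event_loop_policy(asyncio_glib.GLibEventLoopPolicy())

async def main():
    await asyncio.sleep(1)
    print('running on the GLib main loop')

asyncio.get_event_loop().run_until_complete(main())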
ThinkPad Infrared Camera
One of the options available when configuring my ThinkPad was an Infrared camera. The main selling point being "Windows Hello" facial recognition based login. While I wasn't planning on keeping Windows on the system, I was curious to see what I could do with it under Linux. Hopefully this is of use to anyone else trying to get it to work.
The camera is manufactured by Chicony Electronics (probably a CKFGE03 or similar), and shows up as two USB devices:
Tag: Guadec
Converting BigBlueButton recordings to self-contained videos
When the pandemic lockdowns started, my local Linux User Group started looking at video conferencing tools we could use to continue presenting talks and other events to members. We ended up adopting BigBlueButton: as well as being Open Source, its focus on education made it well suited for presenting talks. It has the concept of a presenter role, and built-in support for slides (it sends them to viewers as images, rather than another video stream). It can also record sessions for later viewing.
DVCS talks at GUADEC
Yesterday, a BoF was scheduled for discussion of distributed version control systems with GNOME. The BoF session did not end up really discussing the issues of what GNOME needs out of a revision control system, and some of the examples Federico used were a bit snarky.
We had a more productive meeting in the session afterwards where we went over some of the concrete goals for the system. The list from the blackboard was:
GUADEC 2003: Libegg and PyORBit
At GUADEC 2003 in Dublin, I gave talks about Libegg and PyORBit.
GUADEC 2002: PyGTK
At GUADEC 2002 in Seville I gave a talk about the state of the Python bindings for GTK and GNOME. At this point, I was recommending people move off the old GTK 1.2 bindings, so this talk covered the process of porting existing applications.
GUADEC 2001: PyGTK
At GUADEC 2001 in Copenhagen, I gave a talk about the work I’d been doing on PyGTK. In particular, it talked about the major rewrite to build on top of ExtensionClass (a precursor of Python 2’s new style classes), and the start of GTK 2.0 support.
GUADEC 2000: Dia and PyGTK
At GUADEC 2000 in Paris, I gave talks about the Dia diagram editor, and my Python bindings for GTK and GNOME.
Tag: Python
Converting BigBlueButton recordings to self-contained videos
When the pandemic lockdowns started, my local Linux User Group started looking at video conferencing tools we could use to continue presenting talks and other events to members. We ended up adopting BigBlueButton: as well as being Open Source, its focus on education made it well suited for presenting talks. It has the concept of a presenter role, and built-in support for slides (it sends them to viewers as images, rather than another video stream). It can also record sessions for later viewing.
Using GAsyncResult APIs with Python's asyncio
With a GLib implementation of the Python asyncio event
loop, I can easily mix
asyncio code with GLib/GTK code in the same thread. The next step is
to see whether we can use this to make any APIs more convenient to
use. A good candidate is APIs that make use of GAsyncResult
.
These APIs generally consist of one function call that initiates the
asynchronous job and takes a callback. The callback will be invoked
sometime later with a GAsyncResult
object, which can be passed to a
"finish" function to convert this to the result type relevant to the
original call. This sort of API is a good candidate to convert to an
asyncio coroutine.
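A minimal sketch of that conversion, assuming the GLib-based asyncio event loop from the previous post is installed and using Gio.File.load_contents_async() as the example call (the helper below is mine, not code from the post):
import asyncio
from gi.repository import Gio

def load_contents(gfile):
    # Bridge the async/finish pair to an asyncio future.
    loop = asyncio.get_event_loop()
    future = loop.create_future()

    def on_ready(source, result, user_data):
        try:
            ok, contents, etag = source.load_contents_finish(result)
            future.set_result(contents)
        except Exception as exc:
            future.set_exception(exc)

    gfile.load_contents_async(None, on_ready, None)
    return future

async def show_size(path):
    contents = await load_contents(Gio.File.new_for_path(path))
    print('%s: %d bytes' % (path, len(contents)))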
Exploring Github Actions
To help keep myself honest, I wanted to set up automated test runs on a few personal projects I host on Github. At first I gave Travis a try, since a number of projects I contribute to use it, but it felt a bit clunky. When I found Github had a new CI system in beta, I signed up for the beta and was accepted a few weeks later.
While it is still in development, the configuration language feels lean and powerful. In comparison, Travis's configuration language has obviously evolved over time with some features not interacting properly (e.g. matrix expansion only working on the first job in a workflow using build stages). While I've never felt like I had a complete grasp of the Travis configuration language, the single page description of Actions configuration language feels complete.
GLib integration for the Python asyncio event loop
As an evening project, I've been working on a small library that integrates the GLib main loop with Python's asyncio. I think I've gotten to the point where it might be useful to other people, so have pushed it up here:
https://github.com/jhenstridge/asyncio-glib
This isn't the only attempt to integrate the two event loops, but the other I found (Gbulb) is unmaintained and seems to reimplement a fair bit of asyncio itself (e.g. it has its own transport classes). So I thought I'd see if I could write something smaller and more maintainable, reusing as much code from the standard library as possible.
Extracting BIOS images and tools from ThinkPad update ISOs
With my old ThinkPad, Lenovo provided BIOS updates in the form of Windows executables or ISO images for a bootable CD. Since I had wiped the Windows partition, the first option wasn't an option. The second option didn't work either, since it expected me to be using the drive in the base I hadn't bought. Luckily I was able to just copy the needed files out of the ISO image to a USB stick that had been set up to boot DOS.
u1ftp: a demonstration of the Ubuntu One API
One of the projects I've been working on has been to improve aspects of the Ubuntu One Developer Documentation web site. While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.
I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients). To help verify that the documentation was useful, I wrote a small program to exercise those APIs. The result is u1ftp: a program that exposes a user's files via an FTP daemon running on localhost. In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.
Packaging Python programs as runnable ZIP files
One feature in recent versions of Python I hadn't played around with until recently is the ability to package up a multi-module program into a ZIP file that can be run directly by the Python interpreter. I didn't find much information about it, so I thought I'd describe what's necessary here.
Python has had the ability to add ZIP files to the module search path since PEP 273 was implemented in Python 2.3. That can let you package up most of your program into a single file, but doesn't help with the main entry point.
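The missing piece for the entry point is that the interpreter will execute a __main__.py found at the top level of the ZIP when you run python myapp.zip. A small sketch of building such an archive with the standard zipfile module (the myapp module names are made up for illustration):
import zipfile

# Put a __main__.py at the top of the archive so that
# "python myapp.zip" has an entry point to run.
with zipfile.ZipFile('myapp.zip', 'w') as zf:
    zf.writestr('__main__.py', 'import myapp.main\nmyapp.main.run()\n')
    zf.write('myapp/__init__.py')
    zf.write('myapp/main.py')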
pygpgme 0.3
This week I put out a new release of pygpgme: a Python extension that lets you perform various tasks with OpenPGP keys via the GPGME library. The new release is available from both Launchpad and PyPI.
There aren't any major new extensions to the API, but this is the first release to support Python 3 (Python 2.x is still supported though). The main hurdle was ensuring that the module correctly handled text vs. binary data. The split I ended up on was to treat most things as text (including textual representations of binary data such as key IDs and fingerprints), and treat the data being passed into or returned from the encryption, decryption, signing and verification commands as binary data. I haven't done a huge amount with the Python 3 version of the module yet, so I'd appreciate bug reports if you find issues.
Watching iView with Rygel
One of the features of Rygel that I found most interesting was the external media server support. It looked like an easy way to publish information on the network without implementing a full UPnP/DLNA media server (i.e. handling the UPnP multicast traffic, transcoding to a format that the remote system can handle, etc).
As a small test, I put together a server that exposes the ABC's iView service to UPnP media renderers. The result is a bit rough around the edges, but the basic functionality works. The source can be grabbed using Bazaar:
django-openid-auth
Last week, we released the source code to django-openid-auth. This is a small library that can add OpenID based authentication to Django applications. It has been used for a number of internal Canonical projects, including the sprint scheduler Scott wrote for the last Ubuntu Developer Summit, so it is possible you've already used the code.
Rather than trying to cover all possible use cases of OpenID, it focuses on providing OpenID Relying Party support to applications using Django's django.contrib.auth authentication system. As such, it is usually enough to edit just two files in an existing application to enable OpenID login.
Getting "bzr send" to work with GMail
One of the nice features of Bazaar is the ability to send a bundle of changes to someone via email. If you use a supported mail client, it will even open the composer with the changes attached. If your client isn't supported, then it'll let you compose a message in your editor and then send it to an SMTP server.
GMail is not a supported mail client, but there are a few workarounds listed on the wiki. Those really come down to using an alternative mail client (either the editor or Mutt) and sending the mails through the GMail SMTP server. Neither solution really appealed to me. There doesn't seem to be a programmatic way of opening up GMail's compose window and adding an attachment (not too surprising for a web app).
Using Twisted Deferred objects with gio
The gio library provides both synchronous and asynchronous interfaces for performing IO. Unfortunately, the two APIs require quite different programming styles, making it difficult to convert code written to the simpler synchronous API to the asynchronous one.
For C programs this is unavoidable, but for Python we should be able to do better. And if you're doing asynchronous event driven code in Python, it makes sense to look at Twisted. In particular, Twisted's Deferred objects can be quite helpful.
Thoughts on OAuth
I've been playing with OAuth a bit lately. The OAuth specification fulfills a role that some people saw as a failing of OpenID: programmatic access to websites and authenticated web services. The expectation that OpenID would handle these cases seems a bit misguided since the two uses cases are quite different:
- OpenID is designed on the principle of letting arbitrary OpenID providers talk to arbitrary relying parties and vice versa.
- OpenID is intentionally vague about how the provider authenticates the user. The only restriction is that the authentication must be able to fit into a web browsing session between the user and provider.
While these are quite useful features for a decentralised user authentication scheme, the requirements for web service authentication are quite different:
Django support landed in Storm
Since my last article on integrating Storm with Django, I've merged my changes to Storm's trunk. This missed the 0.13 release, so you'll need to use Bazaar to get the latest trunk or wait for 0.14.
The focus since the last post was to get Storm to cooperate with Django's built in ORM. One of the reasons people use Django is the existing components that can be used to build a site. This ranges from the included user management and administration code to full web shop implementations. So even if you plan to use Storm for your Django application, your application will most likely use Django's ORM for some things.
Transaction Management in Django
In my previous post about Django, I mentioned that I found the transaction handling strategy in Django to be a bit surprising.
Like most object relational mappers, it caches information retrieved from the database, since you don't want to be constantly issuing SELECT queries for every attribute access. However, it defaults to committing after saving changes to each object. So a single web request might end up issuing many transactions:
Operation | Transaction |
---|---|
Change object 1 | Transaction 1 |
Change object 2 | Transaction 2 |
Change object 3 | Transaction 3 |
Change object 4 | Transaction 4 |
Change object 5 | Transaction 5 |
Unless no one else is accessing the database, there is a chance that other users could modify objects that the ORM has cached over the transaction boundaries. This also makes it difficult to test your application in any meaningful way, since it is hard to predict what changes will occur at those points. Django does provide a few ways to provide better transactional behaviour.
Storm 0.13
Yesterday, Thomas rolled the 0.13 release of Storm, which can be downloaded from Launchpad. Storm is the object relational mapper for Python used by Launchpad and Landscape, so it is capable of supporting quite large scale applications. It is seven months since the last release, so there is a lot of improvements. Here are a few simple statistics:
 | 0.12 | 0.13 | Change |
---|---|---|---|
Tarball size (KB) | 117 | 155 | 38 |
Mainline revisions | 213 | 262 | 49 |
Revisions in ancestry | 552 | 875 | 323 |
So it is a fairly significant update by any of these metrics. Among the new features are:
Using Storm with Django
I've been playing around with Django a bit for work recently, which has been interesting to see what choices they've made differently to Zope 3. There were a few things that surprised me:
- The ORM and database layer defaults to autocommit mode rather than using transactions. This seems like an odd choice given that all the major free databases support transactions these days. While autocommit might work fine when a web application is under light use, it is a recipe for problems at higher loads. By using transactions that last for the duration of the request, the testing you do is more likely to help with the high load situations.
- While there is a middleware class to enable request-duration transactions, it only covers the database connection. There is no global transaction manager to coordinate multiple DB connections or other resources.
- The ORM appears to only support a single connection for a request. While this is the most common case and should be easy to code with, allowing an application to expand past this limit seems prudent.
- The tutorial promotes schema generation from Python models, which I feel is the wrong choice for any application that is likely to evolve over time (i.e. pretty much every application). I've written about this previously and believe that migration based schema management is a more workable solution.
- It poorly reinvents thread local storage in a few places. This isn't too surprising for things that existed prior to Python 2.4, and probably isn't a problem for its default mode of operation.
Other than these things I've noticed so far, it looks like a nice framework.
How not to do thread local storage with Python
The Python standard library contains a
function called thread.get_ident()
. It will return an integer that
uniquely identifies the current thread at that point in time. On most
UNIX systems, this will be the pthread_t
value returned by
pthread_self()
. At first look, this might seem like a good value to
key a thread local storage dictionary with. Please don't do that.
The value uniquely identifies the thread only as long as it is running. The value can be reused after the thread exits. On my system, this happens quite reliably with the following sample program printing the same ID ten times:
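The sample program isn't part of this excerpt; a minimal sketch of the kind of program described (short-lived threads started one after another) would be:
import thread, threading

def worker():
    print(thread.get_ident())

# Start ten short-lived threads one after another.  Because each thread
# has exited before the next one starts, the underlying pthread ID is
# free for reuse, so the same value tends to be printed every time.
for i in range(10):
    t = threading.Thread(target=worker)
    t.start()
    t.join()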
Psycopg migrated to Bazaar
Last week we moved psycopg from Subversion to Bazaar. I did the migration using Gustavo Niemeyer's svn2bzr tool with a few tweaks to map the old Subversion committer IDs to the email address form conventionally used by Bazaar.
The tool does a good job of following tree copies and creating related Bazaar branches. It doesn't have any special handling for stuff in the tags/ directory (it produces new branches, as it does for other tree copies). To get real Bazaar tags, I wrote a simple post-processing script to calculate the heads of all the branches in a tags/ directory and set them as tags in another branch (provided those revisions occur in its ancestry). This worked pretty well except for a few revisions synthesised by a previous cvs2svn migration. As these tags were from pretty old psycopg 1 releases I don't know how much it matters.
Psycopg2 2.0.7 Released
Yesterday Federico released version 2.0.7 of psycopg2 (a Python database adapter for PostgreSQL). I made a fair number of the changes in this release to make it more usable for some of Canonical's applications. The new release should work with the development version of Storm, and shouldn't be too difficult to get everything working with other frameworks.
Some of the improvements include:
- Better selection of exceptions based on the SQLSTATE result field. This causes a number of errors that were reported as ProgrammingError to use a more appropriate exception (e.g. DataError, OperationalError, InternalError). This was the change that broke Storm's test suite as it was checking for ProgrammingError on some queries that were clearly not programming errors.
- Proper error reporting for commit() and rollback(). These methods now use the same error reporting code paths as execute(), so an integrity error on commit() will now raise IntegrityError rather than OperationalError.
- The compile-time switch that controls whether the display_size member of Cursor.description is calculated is now turned off by default. The code was quite expensive and the field is of limited use (and not provided by a number of other database adapters).
- New QueryCanceledError and TransactionRollbackError exceptions. The first is useful for handling queries that are canceled by statement_timeout. The second provides a convenient way to catch serialisation failures and deadlocks: errors that indicate the transaction should be retried.
- Fixes for a few memory leaks and GIL misuses. One of the leaks was in the notice processing code that could be particularly problematic for long-running daemon processes.
- Better test coverage and a driver script to run the entire test suite in one go. The tests should all pass too, provided your database cluster uses unicode (there was a report just before the release of one test failing for a LATIN1 cluster).
If you're using previous versions of psycopg2, I'd highly recommend upgrading to this release.
Running Valgrind on Python Extensions
As most developers know, Valgrind is an invaluable tool for finding memory leaks. However, when debugging Python programs the pymalloc allocator gets in the way.
There is a Valgrind suppression file distributed with Python that gets rid of most of the false positives, but does not give particularly good diagnostics for memory allocated through pymalloc. To properly analyse leaks, you often need to recompile Python with pymalloc disabled.
As I don't like having to recompile Python I took a look at Valgrind's client API, which provides a way for a program to detect whether it is running under Valgrind. Using the client API I was able to put together a patch that automatically disables pymalloc when appropriate. It can be found attached to bug 2422 in the Python bug tracker.
Two‐Phase Commit in Python's DB‐API
Marc uploaded a new revision of the Python DB-API 2.0 Specification yesterday that documents the new two phase commit extension that I helped develop on the db-sig mailing list.
My interest in this started from the desire to support two phase commit in Storm – without that feature there are far fewer occasions where its ability to talk to multiple databases can be put to use. As I was doing some work on psycopg2 for Launchpad, I initially put together a PostgreSQL specific patch, which was (rightly) rejected by Federico.
Zeroconf Branch Sharing with Bazaar
When collaborating with someone at one of these sprints the usual way to let others look at my work would be to commit the changes so that they could be pulled or merged by others. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.
Re: Python factory-like type instances
Nicolas:
Your metaclass example is a good example of when not to use metaclasses.
I wouldn't be surprised if it is executed slightly different to how you
expect. Let's look at how Foo
is evaluated, starting with what's
written:
class Foo:
    __metaclass__ = FooMeta
This is equivalent to the following assignment:
Foo = FooMeta('Foo', (), {...})
As FooMeta
has an __new__()
method, the attempt to instantiate
FooMeta
will result in it being called. As the return value of
__new__()
is not a FooMeta
instance, there is no attempt to call
FooMeta.__init__()
. So we could further simplify the code to:
urlparse considered harmful
Over the weekend, I spent a number of hours tracking down a bug caused
by the cache in the Python urlparse
module. The problem
has already been reported as Python bug
1313119, but has not been fixed
yet.
First a bit of background. The urlparse
module does what you'd expect
and parses a URL into its components:
>>> from urlparse import urlparse
>>> urlparse('http://www.gnome.org/')
('http', 'www.gnome.org', '/', '', '', '')
As well as accepting byte strings (which you'd be using at the HTTP protocol level), it also accepts Unicode strings (which you'd be using at the HTML or XML content level):
Storm Released
This week at the EuroPython conference, Gustavo Niemeyer announced the release of Storm and gave a tutorial on using it.
Storm is a new object relational mapper for Python that was developed for use in some Canonical projects, and we've been working on moving Launchpad over to it. I'll discuss a few of the nice features of the package:
Loose Binding Between Database Connections and Classes
Storm has a much looser binding between database connections and the classes used to represent records in particular tables. The standard way of querying the database uses a store object:
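The example itself is cut off here, but the general shape follows Storm's tutorial conventions; the class, table and query below are made up for illustration rather than taken from the post.
from storm.locals import create_database, Store, Int, Unicode

class Person(object):
    __storm_table__ = 'person'
    id = Int(primary=True)
    name = Unicode()

# All queries go through a store bound to a particular database.
database = create_database('sqlite:')
store = Store(database)
person = store.find(Person, Person.name == u'Alice').one()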
ZeroConf support for Bazaar
When at conferences and sprints, I often want to see what someone else is working on, or to let other people see what I am working on. Usually we end up pushing up to a shared server and using that as a way to exchange branches. However, this can be quite frustrating when competing for outside bandwidth when at a conference.
It is possible to share the branch from a local web server, but that still means you need to work out the addressing issues.
Python time.timezone / time.altzone edge case
While browsing the log of one of my Bazaar branches, I noticed that the commit messages were being recorded as occurring in the +0800 time zone even though WA switched over to daylight savings.
Bazaar stores commit dates as a standard UNIX seconds since epoch value
and a time zone offset in seconds. So the problem was with the way that
time zone offset was recorded. The code in bzrlib
that calculates the
offset looks like this:
Recovering a Branch From a Bazaar Repository
In my previous entry, I mentioned that Andrew was actually publishing the contents of all his Bazaar branches with his rsync script, even though he was only advertising a single branch. Yesterday I had a need to actually do this, so I thought I'd detail how to do it.
As a refresher, a Bazaar repository stores the revision graph for the ancestry of all the branches stored inside it. A branch is essentially just a pointer to the head revision of a particular line of development. So if the branch has been deleted but the data is still in the repository, recovering it is a simple matter of discovering the identifier for the head revision.
UTC+9
Daylight saving started yesterday: the first time since the 1991/1992 summer for Western Australia. The legislation finally passed the upper house on 21st November (12 days before the transition date). The updated tzdata packages were released on 27th November (6 days before the transition). So far, there hasn't been an updated package released for Ubuntu (see bug 72125).
One thing brought up in the Launchpad bug was that not all applications used the system /usr/share/zoneinfo time zone database. So other places that might need updating include:
Playing Around With the Bluez D-BUS Interface
In my previous entry about using the Maemo obex-module on the desktop, Johan Hedberg mentioned that bluez-utils 3.7 included equivalent interfaces to the osso-gwconnect daemon used by the method. Since then, the copy of bluez-utils in Edgy has been updated to 3.7, and the necessary interfaces are enabled in hcid by default.
Before trying to modify the VFS code, I thought I'd experiment a bit with the D-BUS interfaces via the D-BUS python bindings. Most of the interesting method calls exist on the org.bluez.Adapter interface. We can easily get the default adapter with the following code:
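Working from memory of the BlueZ 3.x D-Bus API, getting hold of the default adapter would go something like this (a sketch under that assumption, not the post's original code):

import dbus

bus = dbus.SystemBus()

# The BlueZ 3.x API exposed a Manager object at /org/bluez that could hand
# back the object path of the default adapter (e.g. /org/bluez/hci0).
manager = dbus.Interface(bus.get_object('org.bluez', '/org/bluez'),
                         'org.bluez.Manager')
adapter_path = manager.DefaultAdapter()

adapter = dbus.Interface(bus.get_object('org.bluez', adapter_path),
                         'org.bluez.Adapter')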
Launchpad entered into Python bug tracker competition
The Python developers have been looking for a new bug tracker, and essentially put out a tender for people interested in providing a bug tracker. Recently I have been working on getting Launchpad's entry ready, which mainly involved working on SourceForge import.
The entry is now up, and our demonstration server is up and running with a snapshot of the Python bug tracker data.
As a side effect of this, we've got fairly good SourceForge tracker import support now, which we should be able to use if other projects want to switch away from SF.
Re: Lazy loading
Emmanuel: if you are using a language like Python, you can let the language keep track of your state machine for something like that:
import gobject

def load_items(treeview, liststore, items):
    for obj in items:
        liststore.append((obj.get_foo(),
                          obj.get_bar(),
                          obj.get_baz()))
        yield True
    treeview.set_model(liststore)
    yield False

def lazy_load_items(treeview, liststore, items):
    gobject.idle_add(load_items(treeview, liststore, items).next)
Here, load_items() is a generator that will iterate over a sequence like [True, True, ..., True, False]. The next() method is used to get the next value from the iterator. When used as an idle function with this particular generator, it results in one item being added to the list store per idle call until we get to the end of the generator body, where the "yield False" statement results in the idle function being removed.
pygpgme 0.1 released
Back in January I started working on a new Python wrapper for the GPGME library. I recently put out the first release:
This library allows you to encrypt, decrypt, sign and verify messages in the OpenPGP format, using gpg as the backend. In general, it stays fairly close to the C API with the following changes:
- Represent C structures as Python classes where appropriate (e.g. contexts, keys, etc). Operations on those data types are converted to methods.
- The gpgme_data_t type is not exposed directly. Instead, any Python object that looks like a file object can be passed (including StringIO objects).
- In cases where there are gpgme_op_XXXX() and gpgme_op_XXXX_result() function pairs, these have been replaced by a single gpgme.Context.XXXX() method. Errors are returned in the exception where appropriate.
- No explicit memory management. As expected for a Python module, memory management is automatic.
The module also releases the global interpreter lock over calls that fork gpg subprocesses. This should make the module multithread friendly.
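To give a feel for the API described above, here is a rough usage sketch written from memory; the key ID is a placeholder of my own and the exact method signatures may differ slightly from the released module:

import gpgme
from StringIO import StringIO

ctx = gpgme.Context()
ctx.armor = True

# Look up a recipient key by a (hypothetical) key ID and encrypt some data.
key = ctx.get_key('0123456789ABCDEF')
plaintext = StringIO('hello world')
ciphertext = StringIO()
ctx.encrypt([key], 0, plaintext, ciphertext)   # file-like objects instead of gpgme_data_t
print ciphertext.getvalue()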
Python class advisors
Anyone who has played with Zope 3 has probably seen the syntax used to declare what interfaces a particular class implements. It looks something like this:
class Foo:
    implements(IFoo, IBar)
    ...
This leads to the following question: how can a function call inside a class definition's scope affect the resulting class? To understand how this works, a little knowledge of Python metaclasses is needed.
Metaclasses
In Python, classes are instances of metaclasses. For new-style classes, the default metaclass is type (which happens to be its own metaclass). When you create a new class or subclass, you are creating a new instance of the metaclass. The constructor for a metaclass takes three arguments: the class's name, a tuple of the base classes and a dictionary of attributes and methods. So the following two definitions of the class C are equivalent:
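The standard illustration of this equivalence goes like this (a minimal example of my own):

# The usual class statement...
class C(object):
    greeting = 'hello'
    def method(self):
        return self.greeting

# ...is just sugar for calling the metaclass directly:
def method(self):
    return self.greeting
C = type('C', (object,), {'greeting': 'hello', 'method': method})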
Version control discussion on the Python list
The Python developers have been discussing a migration off CVS on the python-dev mailing list. During the discussion, Bazaar-NG was mentioned. A few posts of note:
- Mark Shuttleworth provides some information on the Bazaar roadmap. Importantly, Bazaar-NG will become Bazaar 2.0.
- Steve Alexander describes how we use Bazaar to develop Launchpad. This includes a description of the branch review process we use to integrate changes into the mainline.
I'm going to have to play around with bzr a bit more, but it looks very nice (and should require less typing than baz ...)
Overriding Class Methods in Python
One of the features added back in Python 2.2 was class methods. These differ from traditional methods in the following ways:
- They can be called on both the class itself and instances of the class.
- Rather than binding to an instance, they bind to the class. This means that the first argument passed to the method is a class object rather than an instance.
For most intents and purposes, class methods are written the same way as normal instance methods. One place that things differ is overriding a class method in a subclass. The following simple example demonstrates the problem:
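The problem is along the following lines (a reconstruction of the general issue, not the original example): when overriding a class method, calling the base class version directly rebinds the class argument to the base class.

class A(object):
    def create(cls, *args):
        return cls(*args)
    create = classmethod(create)   # decorator syntax didn't exist yet in 2.2/2.3

class B(A):
    def create(cls, *args):
        # Calling A.create(*args) directly would rebind cls to A, so the
        # object constructed would be an A rather than a B.  Going through
        # super() keeps cls bound to the subclass:
        return super(B, cls).create(*args)
    create = classmethod(create)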
Python Challenge
Found out about The Python Challenge. While you don't need to use Python to solve most of the problems, a knowledge of the language certainly helps. While the initial problems are fairly easy, some of the later ones are quite difficult, and cover many topics.
If you decide to have a go, here are a few hints that might help:
- Keep a log of what you do. Solutions to earlier problems may provide insight into subsequent ones.
- Look at ALL the information provided to you. If the solution isn't apparent, look for patterns in the information and extrapolate.
- If you are using brute force to solve a problem, there is probably a quicker and simpler method to get the answer.
- If you get stuck, check the forum for hints.
There is also a solutions wiki, however, you need to have solved the corresponding problem before it will give you access.
Tracing Python Programs
I was asked recently whether there was an equivalent of sh -x for Python (i.e. print out each statement before it is run), to help with debugging a script. It turns out that there is a module in the Python standard library to do so, but it isn't listed in the standard library reference for some reason.
To use it, simply run the program like this:
/usr/lib/python2.4/trace.py -t program.py
This'll print out the filename, line number and contents of that line before executing the code. If you want to skip the output for the standard library (i.e. only show statements from your own code), simply pass --ignore-dir=/usr/lib/python2.4 (or similar) as an option.
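The same module can also be driven programmatically; a small sketch (the path and the use of execfile are illustrative assumptions, matching the Python 2.4 era of the post):

import trace

# trace=1 prints each line before it runs; count=0 disables coverage counting.
tracer = trace.Trace(count=0, trace=1,
                     ignoredirs=['/usr/lib/python2.4'])
tracer.run('execfile("program.py")')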
Python Unicode Weirdness
While discussing unicode on IRC with owen, we ran into a peculiarity in Python's unicode handling. It can be tested with the following code:
>>> s = u'\U00010001\U00010002'
>>> len(s)
>>> s[0]
Python can be compiled to use either 16-bit or 32-bit widths for characters in its unicode strings (16-bit being the default). When compiled in 32-bit mode, the results of the last two statements are 2 and u'\U00010001' respectively. When compiled in 16-bit mode, the results are 4 and u'\ud800'.
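A quick way to check which kind of build you are running (a small aside of mine, not from the original post):

import sys

# sys.maxunicode is 0x10FFFF on wide (UCS-4) builds and 0xFFFF on narrow
# (UCS-2) builds, where non-BMP characters are stored as surrogate pairs.
if sys.maxunicode > 0xFFFF:
    print "wide build"
else:
    print "narrow build"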
Tag: Asyncio
Using GAsyncResult APIs with Python's asyncio
With a GLib implementation of the Python asyncio event loop, I can easily mix asyncio code with GLib/GTK code in the same thread. The next step is to see whether we can use this to make any APIs more convenient to use. A good candidate is APIs that make use of GAsyncResult.
These APIs generally consist of one function call that initiates the asynchronous job and takes a callback. The callback will be invoked sometime later with a GAsyncResult object, which can be passed to a "finish" function to convert this to the result type relevant to the original call. This sort of API is a good candidate to convert to an asyncio coroutine.
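As an illustration of the pattern (a hand-rolled sketch of my own, assuming an asyncio event loop that also iterates the GLib main context is running), wrapping Gio.File.query_info_async() into a coroutine might look like this:

import asyncio
from gi.repository import Gio, GLib

async def query_file_info(gfile, attributes):
    """Await the result of Gio.File.query_info_async()."""
    loop = asyncio.get_event_loop()
    future = loop.create_future()

    def on_ready(source, result, user_data=None):
        try:
            # The "finish" call converts the GAsyncResult into a Gio.FileInfo.
            future.set_result(source.query_info_finish(result))
        except GLib.Error as exc:
            future.set_exception(exc)

    gfile.query_info_async(attributes, Gio.FileQueryInfoFlags.NONE,
                           GLib.PRIORITY_DEFAULT, None, on_ready)
    return await future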
GLib integration for the Python asyncio event loop
As an evening project, I've been working on a small library that integrates the GLib main loop with Python's asyncio. I think I've gotten to the point where it might be useful to other people, so have pushed it up here:
https://github.com/jhenstridge/asyncio-glib
This isn't the only attempt to integrate the two event loops, but the other one I found (Gbulb) is unmaintained and seems to reimplement a fair bit of asyncio (e.g. it has its own transport classes). So I thought I'd see if I could write something smaller and more maintainable, reusing as much code from the standard library as possible.
Tag: Glib
Using GAsyncResult APIs with Python's asyncio
With a GLib implementation of the Python asyncio event loop, I can easily mix asyncio code with GLib/GTK code in the same thread. The next step is to see whether we can use this to make any APIs more convenient to use. A good candidate is APIs that make use of GAsyncResult.
These APIs generally consist of one function call that initiates the asynchronous job and takes a callback. The callback will be invoked sometime later with a GAsyncResult object, which can be passed to a "finish" function to convert this to the result type relevant to the original call. This sort of API is a good candidate to convert to an asyncio coroutine.
GLib integration for the Python asyncio event loop
As an evening project, I've been working on a small library that integrates the GLib main loop with Python's asyncio. I think I've gotten to the point where it might be useful to other people, so have pushed it up here:
https://github.com/jhenstridge/asyncio-glib
This isn't the only attempt to integrate the two event loops, but the other one I found (Gbulb) is unmaintained and seems to reimplement a fair bit of asyncio (e.g. it has its own transport classes). So I thought I'd see if I could write something smaller and more maintainable, reusing as much code from the standard library as possible.
Tag: Continuous Integration
Exploring Github Actions
To help keep myself honest, I wanted to set up automated test runs on a few personal projects I host on Github. At first I gave Travis a try, since a number of projects I contribute to use it, but it felt a bit clunky. When I found Github had a new CI system in beta, I signed up for the beta and was accepted a few weeks later.
While it is still in development, the configuration language feels lean and powerful. In comparison, Travis's configuration language has obviously evolved over time, with some features not interacting properly (e.g. matrix expansion only working on the first job in a workflow using build stages). While I've never felt like I had a complete grasp of the Travis configuration language, the single page description of the Actions configuration language feels complete.
Tag: Snapd
Building IoT projects with Ubuntu Core talk
Last week I gave a talk at Perth Linux Users Group about building IoT projects using Ubuntu Core and Snapcraft. The video is now available online. Unfortunately there were some problems with the audio setup leading to some background noise in the video, but it is still intelligible:
The slides used in the talk can be found here.
Tag: ThinkPad
ThinkPad Infrared Camera
One of the options available when configuring my ThinkPad was an Infrared camera, the main selling point being "Windows Hello" facial recognition based login. While I wasn't planning on keeping Windows on the system, I was curious to see what I could do with it under Linux. Hopefully this is of use to anyone else trying to get it to work.
The camera is manufactured by Chicony Electronics (probably a CKFGE03 or similar), and shows up as two USB devices:
Tag: Linux.conf.au
Linux.conf.au 2014: Unity Scopes
At Linux.conf.au 2014 in Perth, I gave a talk about the “scopes” system I had worked on at Canonical as part of the Unity API team. Scopes were pluggable search providers for the Unity dash. A future version of the framework drove the launcher on Ubuntu Phone.
linux.conf.au 2011
I've just got through the first one and a half days of LCA2011 in Brisbane. The organisers have done a great job, especially considering the flooding they have had to deal with.
Due to the venue change the accommodation I booked is no longer within walking distance of the conference, but the public transport is pretty good. A bit more concerning was the following change to the wiki made between the time I left Perth and the time I checked in:
In Hobart
Today was the first day of the mini-conferences that lead up to linux.conf.au later on this week. I arrived yesterday after an eventful flight from Perth.
I was originally meant to fly out to Melbourne on the red eye leaving on Friday at 11:35pm, but just before I checked in they announced that the flight had been delayed until 4:00am the following day. As I hadn't had a chance to check in, I was able to get a pair of taxi vouchers to get home and back. I only got about 2 hours of sleep though, as they said they would turn off the baggage processing system at 3am. When I got back to the airport, I could see all the people who had stayed at the terminal spread out with airplane blankets. A little before the 4:00am deadline, another announcement was made saying the plane would now be leaving at 5:00am. Apparently they had needed to fly a replacement component in from over east to fix a problem found during maintenance. Still, it seems it wasn't the most delayed Qantas flight for that weekend and it did arrive in one piece.
Linux.conf.au 2004: Scripting with PyORBit
At Linux.conf.au 2004 in Adelaide, I gave a talk about controlling GNOME applications from Python via the accessibility framework.
Linux.conf.au 2003: EggMenu
I gave a talk at Linux.conf.au 2003 about the experimental “EggMenu” framework I had been working on. This code was eventually merged into GTK 2.4 as GtkUIManager.
Tag: Launchpad
u1ftp: a demonstration of the Ubuntu One API
One of the projects I've been working on has been to improve aspects of the Ubuntu One Developer Documentation web site. While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.
I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients). To help verify that the documentation was useful, I wrote a small program to exercise those APIs. The result is u1ftp: a program that exposes a user's files via an FTP daemon running on localhost. In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.
Launchpad code scanned by Ohloh
Today Ohloh finished importing the Launchpad source code and produced the first source code analysis report. There seems to be something fishy about the reported line counts (e.g. -3,291 lines of SQL), but the commit counts and contributor list look about right. If you're interested in what sort of effort goes into producing an application like Launchpad, then it is worth a look.
Comments:
e -
Have you seen the perl language?
Ubuntu packages for Rygel
I promised Zeeshan that I'd have a look at his Rygel UPnP Media Server a few months back, and finally got around to doing so. For anyone else who wants to give it a shot, I've put together some Ubuntu packages for Jaunty and Karmic in a PPA here:
Most of the packages there are just rebuilds or version updates of existing packages, but the Rygel ones were done from scratch. It is the first Debian package I've put together from scratch and it wasn't as difficult as I thought it might be. The tips from the "Teach me packaging" workshop at the Canonical All Hands meeting last month were quite helpful.
django-openid-auth
Last week, we released the source code to django-openid-auth. This is a small library that can add OpenID based authentication to Django applications. It has been used for a number of internal Canonical projects, including the sprint scheduler Scott wrote for the last Ubuntu Developer Summit, so it is possible you've already used the code.
Rather than trying to cover all possible use cases of OpenID, it focuses on providing OpenID Relying Party support to applications using Django's django.contrib.auth authentication system. As such, it is usually enough to edit just two files in an existing application to enable OpenID login.
Storm 0.13
Yesterday, Thomas rolled the 0.13 release of Storm, which can be downloaded from Launchpad. Storm is the object relational mapper for Python used by Launchpad and Landscape, so it is capable of supporting quite large scale applications. It is seven months since the last release, so there are a lot of improvements. Here are a few simple statistics:
| | 0.12 | 0.13 | Change |
|---|---|---|---|
| Tarball size (KB) | 117 | 155 | 38 |
| Mainline revisions | 213 | 262 | 49 |
| Revisions in ancestry | 552 | 875 | 323 |
So it is a fairly significant update by any of these metrics. Among the new features are:
MySQL Announces Move to Bazaar
The published Bazaar branches include 8 years of history going back to MySQL 3.23.22, imported from the BitKeeper repositories. So you can see a lot more than just the history since the switch: you can use all the normal Bazaar tools to see where the code came from and how it evolved. Giuseppe Maxia has posted some instructions on how to check out the code for those who are interested.
Psycopg migrated to Bazaar
Last week we moved psycopg from Subversion to Bazaar. I did the migration using Gustavo Niemeyer's svn2bzr tool with a few tweaks to map the old Subversion committer IDs to the email address form conventionally used by Bazaar.
The tool does a good job of following tree copies and creating related Bazaar branches. It doesn't have any special handling for stuff in the tags/ directory (it produces new branches, as it does for other tree copies). To get real Bazaar tags, I wrote a simple post-processing script to calculate the heads of all the branches in a tags/ directory and set them as tags in another branch (provided those revisions occur in its ancestry). This worked pretty well except for a few revisions synthesised by a previous cvs2svn migration. As these tags were from pretty old psycopg 1 releases, I don't know how much it matters.
Inkscape Migrated to Launchpad
Yesterday I performed the migration of Inkscape's bugs from SourceForge.net to Launchpad. This was a full import of all their historic bug data – about 6900 bugs.
As the import only had access to the SF user names for bug reporters, commenters and assignees, it was not possible to link them up to existing Launchpad users in most cases. This means that duplicate person objects have been created with email addresses like $USERNAME@users.sourceforge.net.
On the way to Boston
I am at Narita Airport at the moment, on the way to Boston for some of the meetings being held during UDS. It'll be good to catch up with everyone again.
Hopefully this trip won't be as eventful as the previous one to Florida :)
Schema Generation in ORMs
When Storm was released, one of the comments made was that it did not include the ability to generate a database schema from the Python classes used to represent the tables, even though this feature is available in a number of competing ORMs. The simple reason for this is that we haven't used schema generation in any of our ORM-using projects.
Furthermore I'd argue that schema generation is not really appropriate for long lived projects where the data stored in the database is important. Imagine developing an application along these lines:
In Florida
This week I am in Florida for a Launchpad sprint. I was meant to arrive on Sunday night, but I fell asleep in the boarding lounge and missed the San Francisco → Orlando flight (the flight out of Perth was an early morning one, and I didn't get enough sleep on the plane). The earliest alternative flight was the same time the next day, so I ended up arriving on Monday night.
Canonical Shop Open
The new Canonical Shop was opened recently which allows you to buy anything from Ubuntu tshirts and DVDs up to a 24/7 support contract for your server.
One thing to note is that this is the first site using our new Launchpad single sign-on infrastructure. We will be rolling this out to other sites in time, which should give a better user experience to the existing shared authentication system currently in place for the wikis.
Bazaar Bundles
This article follows on from the series of tutorials on using Bazaar that I have neglected for a while. This article is about the bundle feature of Bazaar. Bundles are to Bazaar branches what patches are to tarballs or plain source trees.
Context/unified diffs and the patch utility are arguably one of the most important inventions that enable distributed development:
- The patch is a self contained text file, making it easy to send as an email attachment or attach to a bug report.
- The size of the patch is proportional to the size of the changes rather than the size of the source tree. So submitting a one line fix to the Linux kernel is as easy as a one line fix for a small one person project.
- Even if the destination source tree has moved forward since the patch was created, the patch utility does a decent job of applying the changes using heuristics to match the surrounding context. Human intervention is only needed if the edits are to the same section of code.
- As patches are human readable text files, they are a convenient form to review the code changes.
Of course, patches do have their limitations:
Storm Released
This week at the EuroPython conference, Gustavo Niemeyer announced the release of Storm and gave a tutorial on using it.
Storm is a new object relational mapper for Python that was developed for use in some Canonical projects, and we've been working on moving Launchpad over to it. I'll discuss a few of the nice features of the package:
Loose Binding Between Database Connections and Classes
Storm has a much looser binding between database connections and the classes used to represent records in particular tables. The standard way of querying the database uses a store object:
gnome-vfs-obexftp 0.3
I've just released a new version of gnome-vfs-obexftp, which includes the features discussed previously. It can be downloaded from:
The highlights of the release include:
- Sync osso-gwobex and osso-gnome-vfs-extras changes from Maemo Subversion.
- Instead of asking hcid to set up the RFCOMM device for communication, use an RFCOMM socket directly. This is both faster and doesn't require enabling experimental hcid interfaces. Based on work from Bastien Nocera.
- Improve free space calculation for Nokia phones with multiple memory types (e.g. phone memory and a memory card). Now the free space for the correct memory type for a given directory should be returned. This fixes various free-space dependent operations in Nautilus such as copying files.
Any bug reports should be filed in Launchpad at:
Launchpad 1.0 Public Beta
As mentioned in the press release, we've got two new high profile projects using us for bug tracking: The Zope 3 Project and The Silva Content Management System. As part of their migration, we imported all their old bug reports (for Zope 3, and for Silva). This was done using the same import process that we used for the SchoolTool import. Getting this process documented so that other projects can more easily switch to Launchpad is still on my todo list.
SchoolTool Moves to Launchpad
Recently, the SchoolTool project has migrated to Launchpad for their bug tracker.
We performed an import of all their previous bug reports using a new bug importer I wrote. This was the third Launchpad bug importer I'd written (the previous ones being for the Ubuntu Bugzilla import and a SourceForge importer), so I wanted this one to be the last. So the design of this importer was to have a simple XML format as an intermediate step. That way we only need to target the XML format to support a new bug tracker. This will also make it possible for projects to provide their bug data in a form we can consume for the cases where they want to migrate their bugs to Launchpad but Canonical doesn't have the resources to do the migration.
UTC+9
Daylight saving started yesterday: the first time since the 1991/1992 summer for Western Australia. The legislation finally passed the upper house on 21st November (12 days before the transition date). The updated tzdata packages were released on 27th November (6 days before the transition). So far, there hasn't been an updated package released for Ubuntu (see bug 72125).
One thing brought up in the Launchpad bug was that not all applications used the system /usr/share/zoneinfo time zone database. So other places that might need updating include:
bzr branch https://launchpad.net/products/foo
One of the things we've been working on for Launchpad is good integration with Bazaar. Launchpad provides a way to register or host Bazaar branches, and nominate a Bazaar branch as representing a particular product series.
For each registered branch, there is a branch information page. This leads to a bit of confusion since Bazaar uses URLs to identify branches, so people try running bzr branch on a branch information page. We also get people trying to branch the product or product series pages.
Microsummaries in Firefox 2
One of the new features in Firefox 2 is Microsummaries, which essentially allows dynamic bookmark titles. This is useful when bookmarking volatile pages, since the title can reflect the current state of the document rather than the state when the bookmark was created.
The system works by registering XSLT transformations that generate a simple text string from the page content. The registrations are either done via a <link> element, or matched via regular expressions. The system is designed to target users (who can register their own microsummary generators), website owners (who can suggest a generator through a <link> tag) and 3rd parties (who can provide generators for other sites to users).
--create-prefix not needed with bazaar.launchpad.net
When outlining the use of team branches on Launchpad previously, I used the --create-prefix option when pushing the branch to sftp://bazaar.launchpad.net. This was to make sure the initial push would succeed, even if the /~username/product directory the branch would be created in didn't exist.
To simplify things for users, we made a change to the SFTP server in the latest release, so that --create-prefix is no longer necessary. This does not affect the allowed branch directories though: the structure is used to associate the branches with products, and decide who can write to the branches.
Ubuntu Bugzilla Migration Comment Cleanup
Earlier in the year, we migrated the bugs from bugzilla.ubuntu.com over to Launchpad. This process involved changes to the bug numbers, since Launchpad is used for more than just Ubuntu and already had a number of bugs reported in the system.
People often refer to other bugs in comments, which both Bugzilla and Launchpad conveniently turn into links. The changed bug numbers meant that the bug references in the comments ended up pointing to the wrong bugs. The bug import was done one bug at a time, so if bug A referred to bug B but bug B hadn't been imported by the time we were importing bug A, then we wouldn't know what bug number it should be referring to.
Shared Branches using Bazaar and Launchpad
Earlier, David Allouche described how to host Bazaar branches on Launchpad. At the end, he alluded to the ability to create branches that can be committed to by anyone on a team. I'll describe how this works here.
Launchpad Teams
Launchpad allows people to organise themselves into teams. Most of the things people can do in Launchpad can also be done by teams, including owning branches.
You can create a new team at the following page:
Launchpad entered into Python bug tracker competition
The Python developers have been looking for a new bug tracker, and essentially put out a tender for people interested in providing a bug tracker. Recently I have been working on getting Launchpad's entry ready, which mainly involved working on SourceForge import.
The entry is now up, and our demonstration server is up and running with a snapshot of the Python bug tracker data.
As a side effect of this, we've got fairly good SourceForge tracker import support now, which we should be able to use if other projects want to switch away from SF.
In London
I'm in London at the moment with Carlos, Danilo, David and Steve for a Launchpad sprint focused on Bazaar and Rosetta. The weather is a nice change from Perth winter.
Next week I'll be in Vilnius, Lithuania, and then it is back to London for another week before going home.
Comments:
Pupeno -
Hello, Since you seem to be a developer of Rosetta; to where should I send an 'official' feature request for having an easy or even automatic way of feeding-back the translations to mainstream projects ?
Hosting bzr branches on Launchpad
Have you wanted to play around with bzr but had nowhere to share your branches? You can now publish them through Launchpad. David Allouche provides the details.
In short, you can upload branches to sftp://bazaar.launchpad.net/, and they will be published on http://bazaar.launchpad.net/.
pygpgme 0.1 released
Back in January I started working on a new Python wrapper for the GPGME library. I recently put out the first release:
This library allows you to encrypt, decrypt, sign and verify messages in the OpenPGP format, using gpg as the backend. In general, it stays fairly close to the C API with the following changes:
- Represent C structures as Python classes where appropriate (e.g. contexts, keys, etc). Operations on those data types are converted to methods.
- The gpgme_data_t type is not exposed directly. Instead, any Python object that looks like a file object can be passed (including StringIO objects).
- In cases where there are gpgme_op_XXXX() and gpgme_op_XXXX_result() function pairs, these have been replaced by a single gpgme.Context.XXXX() method. Errors are returned in the exception where appropriate.
- No explicit memory management. As expected for a Python module, memory management is automatic.
The module also releases the global interpreter lock over calls that fork gpg subprocesses. This should make the module multithread friendly.
London
I've been in London for a bit over a week now at the Launchpad sprint. We've been staying in a hotel near the Excel exhibition centre in Docklands, which has a nice view of the docks, and you can see the planes landing at the airport out the windows of the conference rooms.
I met up with James Bromberger (one of the two main organisers of linux.conf.au 2003) on Thursday, which is the first time I've seen him since he left for the UK after the conference.
Launchpad featured on ELER
Launchpad got a mention in the latest Everybody Loves Eric Raymond comic. It is full of inaccuracies though — we use XML-RPC rather than SOAP.
Comments:
opi -
Oh, c'mon. It was quite fun. :-)
Bugzilla to Malone Migration
The Bugzilla migration on Friday went quite well, so we've now got all the old Ubuntu bug reports in Launchpad. Before the migration, we were up to bug #6760. Now that the migration is complete, there are more than 28000 bugs in the system. Here are some quick points to help with the transition:
- All bugzilla.ubuntu.com accounts were migrated to Launchpad accounts with a few caveats:
  - If you already had a Launchpad account with your bugzilla email address associated with it, then the existing Launchpad account was used.
  - No passwords were migrated from Bugzilla, due to differences in the method of storing them. You can set the password on the account at https://launchpad.net/+forgottenpassword.
  - If you had a Launchpad account but used a different email to the one on your Bugzilla account, then you now have two Launchpad accounts. You can merge the two accounts at https://launchpad.net/people/+requestmerge.
- If you have a bugzilla.ubuntu.com bug number, you can find the corresponding Launchpad bug number with the following URL:
Ubuntu Bugzilla Migration
The migration is finally going to happen, after much testing of migration code and improvements to Malone.
If all goes well, Ubuntu will be using a single bug tracker again on Friday (as opposed to the current system where bugs in main go in Bugzilla and bugs in universe go in Malone).
Comments:
Keshav -
Hiiii,
I am Keshav and i am 22. I am working as software dev.engineer in Software Company . I am currently working on Bugzilla. I think i can get some help in understanding how i can migrate bugzilla . Can you provide me the tips and list the actions so that i can come close in making a effective migration functionality
Moving from Bugzilla to Launchpad
One of the things that was discussed at UBZ was moving Ubuntu's bug tracking over to Launchpad. The current situation sees bugs in main being filed in Bugzilla while bugs in universe go in Launchpad. Putting all the bugs in Launchpad is an improvement, since users only need to go to one system to file bugs.
I wrote the majority of the conversion script before the conference, but made a few important improvements at the conference after discussions with some of the developers. Since the bug tracking system is probably of interest to people who weren't at the conf, I'll outline some of the details of the conversion below:
Version control discussion on the Python list
The Python developers have been discussing a migration off CVS on the python-dev mailing list. During the discussion, Bazaar-NG was mentioned. A few posts of note:
- Mark Shuttleworth provides some information on the Bazaar roadmap. Importantly, Bazaar-NG will become Bazaar 2.0.
- Steve Alexander describes how we use Bazaar to develop Launchpad. This includes a description of the branch review process we use to integrate changes into the mainline.
I'm going to have to play around with bzr a bit more, but it looks very nice (and should require less typing than baz ...)
Version Control Workflow
Havoc: we are looking at ways to better integrate version control in Launchpad. There are many areas that could benefit from better use of version control, but I'll focus on bug tracking since you mentioned it.
Take the attachment handling in Bugzilla, for instance. In non-ancient versions, you can attach statuses to attachments such as "obsolete" (which has some special handling in the UI — striking out obsolete attachments and making it easy to mark attachments as obsolete when uploading a new attachment). This makes it easy to track and manage a sequence of patches as a fix for a bug is developed (bug 118372 is a metacity bug with such a chain of patches).
Back from Brazil
I got back from the Launchpad sprint in São Carlos on Tuesday afternoon. It was hard work, but a lot of work got done. Launchpad is really coming together now, and will become even better as some of the things discussed at the sprint get implemented.
One of the things discussed was to formalise some of the development workflow we've been using to develop Launchpad inside Launchpad itself so that it will be usable by other projects.
Tag: OAuth
u1ftp: a demonstration of the Ubuntu One API
One of the projects I've been working on has been to improve aspects of the Ubuntu One Developer Documentation web site. While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.
I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients). To help verify that the documentation was useful, I wrote a small program to exercise those APIs. The result is u1ftp: a program that exposes a user's files via an FTP daemon running on localhost. In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.
Thoughts on OAuth
I've been playing with OAuth a bit lately. The OAuth specification fulfills a role that some people saw as a failing of OpenID: programmatic access to websites and authenticated web services. The expectation that OpenID would handle these cases seems a bit misguided, since the two use cases are quite different:
- OpenID is designed on the principle of letting arbitrary OpenID providers talk to arbitrary relying parties and vice versa.
- OpenID is intentionally vague about how the provider authenticates the user. The only restriction is that the authentication must be able to fit into a web browsing session between the user and provider.
While these are quite useful features for a decentralised user authentication scheme, the requirements for web service authentication are quite different:
Tag: Ubuntu One
u1ftp: a demonstration of the Ubuntu One API
One of the projects I've been working on has been to improve aspects of the Ubuntu One Developer Documentation web site. While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.
I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients). To help verify that the documentation was useful, I wrote a small program to exercise those APIs. The result is u1ftp: a program that exposes a user's files via an FTP daemon running on localhost. In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.
Using Mozmill to Test Firefox Extensions
Recently I've been working on a Firefox extension, and needed a way to test the code. While testing code is always important, it is particularly important for dynamic languages where code that hasn't been run is more likely to be buggy.
I had no experience with how to do this for Firefox extensions, so Eric suggested I try out Mozmill, which has been quite helpful so far. There were no Ubuntu packages for it, so I've put some together in my PPA for anyone interested:
Tag: Firefox
Javascript Mandelbrot Set Fractal Renderer
While at linux.conf.au earlier this year, I started hacking on a Mandelbrot Set fractal renderer implemented in JavaScript as a way to polish my JS skills. In particular, I wanted to get to know the HTML5 Canvas and Worker APIs.
The results turned out pretty well. Click on the image below to try it out:

Clicking anywhere on the fractal will zoom in. You'll need to reload the page to zoom out. Zooming in while the fractal is still being rendered will interrupt the previous rendering job.
Using Mozmill to Test Firefox Extensions
Recently I've been working on a Firefox extension, and needed a way to test the code. While testing code is always important, it is particularly important for dynamic languages where code that hasn't been run is more likely to be buggy.
I had no experience with how to do this for Firefox extensions, so Eric suggested I try out Mozmill, which has been quite helpful so far. There were no Ubuntu packages for it, so I've put some together in my PPA for anyone interested:
Tag: Lca2011
Javascript Mandelbrot Set Fractal Renderer
While at linux.conf.au earlier this year, I started hacking on a Mandelbrot Set fractal renderer implemented in JavaScript as a way to polish my JS skills. In particular, I wanted to get to know the HTML5 Canvas and Worker APIs.
The results turned out pretty well. Click on the image below to try it out:

Clicking anywhere on the fractal will zoom in. You'll need to reload the page to zoom out. Zooming in while the fractal is still being rendered will interrupt the previous rendering job.
linux.conf.au 2011
I've just got through the first one and a half days of LCA2011 in Brisbane. The organisers have done a great job, especially considering the flooding they have had to deal with.
Due to the venue change the accommodation I booked is no longer within walking distance of the conference, but the public transport is pretty good. A bit more concerning was the following change to the wiki made between the time I left Perth and the time I checked in:
Tag: Mozilla
Using Mozmill to Test Firefox Extensions
Recently I've been working on a Firefox extension, and needed a way to test the code. While testing code is always important, it is particularly important for dynamic languages where code that hasn't been run is more likely to be buggy.
I had no experience with how to do this for Firefox extensions, so Eric suggested I try out Mozmill, which has been quite helpful so far. There were no Ubuntu packages for it, so I've put some together in my PPA for anyone interested:
Tag: Testing
Using Mozmill to Test Firefox Extensions
Recently I've been working on a Firefox extension, and needed a way to test the code. While testing code is always important, it is particularly important for dynamic languages where code that hasn't been run is more likely to be buggy.
I had no experience with how to do this for Firefox extensions, so Eric suggested I try out Mozmill, which has been quite helpful so far. There were no Ubuntu packages for it, so I've put some together in my PPA for anyone interested:
Tag: Bazaar
Launchpad code scanned by Ohloh
Today Ohloh finished importing the Launchpad source code and produced the first source code analysis report. There seems to be something fishy about the reported line counts (e.g. -3,291 lines of SQL), but the commit counts and contributor list look about right. If you're interested in what sort of effort goes into producing an application like Launchpad, then it is worth a look.
Comments:
e -
Have you seen the perl language?
Getting "bzr send" to work with GMail
One of the nice features of Bazaar is the ability to send a bundle of changes to someone via email. If you use a supported mail client, it will even open the composer with the changes attached. If your client isn't supported, then it'll let you compose a message in your editor and then send it to an SMTP server.
GMail is not a supported mail client, but there are a few workarounds listed on the wiki. Those really come down to using an alternative mail client (either the editor or Mutt) and sending the mails through the GMail SMTP server. Neither solution really appealed to me. There doesn't seem to be a programmatic way of opening up GMail's compose window and adding an attachment (not too surprising for a web app).
Metrics for success of a DVCS
One thing that has been mentioned in the GNOME DVCS debate was that it is as easy to do "git diff" as it is to do "svn diff" so the learning curve issue is moot. I'd have to disagree here.
Traditional Centralised Version Control
With traditional version control systems (e.g. CVS and Subversion) as used by Free Software projects like GNOME, there are effectively two classes of users that I will refer to as "committers" and "patch contributors":
DVCS talks at GUADEC
Yesterday, a BoF was scheduled for discussion of distributed version control systems with GNOME. The BoF session did not end up really discussing the issues of what GNOME needs out of a revision control system, and some of the examples Federico used were a bit snarky.
We had a more productive meeting in the session afterwards where we went over some of the concrete goals for the system. The list from the blackboard was:
MySQL Announces Move to Bazaar
The published Bazaar branches include 8 years of history going back to MySQL 3.23.22, imported from the BitKeeper repositories. So you can see a lot more than just the history since the switch: you can use all the normal Bazaar tools to see where the code came from and how it evolved. Giuseppe Maxia has posted some instructions on how to check out the code for those who are interested.
bzr commit --author
One of the features I recently discovered in Bazaar is the --author option for "bzr commit". This lets you make commits to a Bazaar branch on behalf of another person. When used, the new revision credits two people: you as the committer and the other person as the author.
While Bazaar does make it easy for non-core contributors to send changes in a form that correctly attributes them (e.g. by publishing a branch or sending a bundle), I doubt we'll ever see the end of pure patches. Some cases include:
Psycopg migrated to Bazaar
Last week we moved psycopg from Subversion to Bazaar. I did the migration using Gustavo Niemeyer's svn2bzr tool with a few tweaks to map the old Subversion committer IDs to the email address form conventionally used by Bazaar.
The tool does a good job of following tree copies and creating related Bazaar branches. It doesn't have any special handling for stuff in the tags/ directory (it produces new branches, as it does for other tree copies). To get real Bazaar tags, I wrote a simple post-processing script to calculate the heads of all the branches in a tags/ directory and set them as tags in another branch (provided those revisions occur in its ancestry). This worked pretty well except for a few revisions synthesised by a previous cvs2svn migration. As these tags were from pretty old psycopg 1 releases, I don't know how much it matters.
Looms Rock
While doing a bit of work on Storm, I decided to try out the loom plugin for Bazaar. The loom plugin is designed to help maintain a stack of changes to a base branch (similar to quilt). Some use cases where this sort of tool is useful include:
- Maintaining a long-running diff to a base branch. Distribution packaging is one such example.
- While developing a new feature, the underlying code may require some refactoring. A loom could be used to keep the refactoring separate from the feature work so that it can be merged ahead of the feature.
- For complex features, code reviewers often prefer changes to be broken down into a sequence of simpler changes. A loom can help maintain the stack of changes in a coherent fashion.
A loom branch helps to manage these different threads in a coherent manner. Each thread in the loom contains all the changes from the threads below it, so the revision graph ends up looking something like this:
bzr-dbus hacking
When working on my bzr-avahi plugin, Robert asked me about how it should fit in with his bzr-dbus plugin. The two plugins offer complementary features, and could share a fair bit of infrastructure code. Furthermore, by not cooperating, there is a risk that the two plugins could break when both installed together.
Given the dependencies of the two packages, it made more sense to put common infrastructure in bzr-dbus and have bzr-avahi depend on it. That said, bzr-dbus is a bit more difficult to install than bzr-avahi, since it requires installation of a D-Bus service activation file. After looking at the code, it seemed that there was room to simplify how bzr-dbus worked and improve its reliability at the same time.
Zeroconf Branch Sharing with Bazaar
When collaborating with someone at one of these sprints the usual way to let others look at my work would be to commit the changes so that they could be pulled or merged by others. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.
States in Version Control Systems
Elijah has been writing an interesting series of articles comparing different version control systems. While the previous articles have been very informative, I think the latest one was a bit muddled. What follows is an expanded version of my comment on that article.
Elijah starts by making an analogy between text editors and version control systems, which I think is quite a useful analogy. When working with a text editor, there is a base version of the file on disk, and the version you are currently working on which will become the next saved version.
Signed Revisions with Bazaar
One useful feature of Bazaar is the ability to cryptographically sign revisions. I was discussing this with Ryan on IRC, and thought I'd write up some of the details as they might be useful to others.
Anyone who remembers the past security of GNOME and Debian servers should be able to understand the benefits of being able to verify the integrity of a source code repository after such an incident. Rather than requiring all revisions made since the last known safe backup to be examined, much of the verification could be done mechanically.
Bazaar bundles as part of a review process
In my previous article, I outlined Bazaar's bundle feature. This article describes how the Bazaar developers use bundles as part of their development and code review process.
Proposed changes to Bazaar are generally posted as patches or bundles to the development mailing list. Each change is discussed on the mailing list (often going through a number of iterations), and ultimately approved or rejected by the core developers. To aid in managing these patches, Aaron Bentley (one of the developers) wrote a tool called Bundle Buggy.
Bazaar Bundles
This article follows on from the series of tutorials on using Bazaar that I have neglected for a while. This article is about the bundle feature of Bazaar. Bundles are to Bazaar branches what patches are to tarballs or plain source trees.
Context/unified diffs and the patch utility are arguably one of the most important inventions that enable distributed development:
- The patch is a self contained text file, making it easy to send as an email attachment or attach to a bug report.
- The size of the patch is proportional to the size of the changes rather than the size of the source tree. So submitting a one line fix to the Linux kernel is as easy as a one line fix for a small one person project.
- Even if the destination source tree has moved forward since the patch was created, the patch utility does a decent job of applying the changes using heuristics to match the surrounding context. Human intervention is only needed if the edits are to the same section of code.
- As patches are human readable text files, they are a convenient form to review the code changes.
Of course, patches do have their limitations:
FM Radio in Rhythmbox – The Code
Previously, I posted about the FM radio plugin I was working on. I just posted the code to bug 168735. A few notes about the implementation:
- The code only supports Video4Linux 2 radio tuners (since that’s the interface my device supports, and the V4L1 compatibility layer doesn’t work for it). It should be possible to port it support both protocols if someone is interested.
- It does not pass the audio through the GStreamer pipeline. Instead, you need to configure your mixer settings to pass the audio through (e.g. unmute the Line-in source and set the volume appropriately). It plugs in a GStreamer source that generates silence to work with the rest of the Rhythmbox infrastructure. This does mean that the volume control and visualisations won't work.
- No properties dialog yet. If you want to set titles on the stations, you'll need to edit rhythmdb.xml directly at the moment.
- The code assumes that the radio device is /dev/radio0.
Other than that, it all works quite well (I've been using it for the last few weeks).
ZeroConf support for Bazaar
When at conferences and sprints, I often want to see what someone else is working on, or to let other people see what I am working on. Usually we end up pushing up to a shared server and using that as a way to exchange branches. However, this can be quite frustrating when competing for outside bandwidth when at a conference.
It is possible to share the branch from a local web server, but that still means you need to work out the addressing issues.
Python time.timezone / time.altzone edge case
While browsing the log of one of my Bazaar branches, I noticed that the commit messages were being recorded as occurring in the +0800 time zone even though WA switched over to daylight savings.
Bazaar stores commit dates as a standard UNIX seconds since epoch value and a time zone offset in seconds. So the problem was with the way that time zone offset was recorded. The code in bzrlib that calculates the offset looks like this:
Recovering a Branch From a Bazaar Repository
In my previous entry, I mentioned that Andrew was actually publishing the contents of all his Bazaar branches with his rsync script, even though he was only advertising a single branch. Yesterday I had a need to actually do this, so I thought I'd detail how to do it.
As a refresher, a Bazaar repository stores the revision graph for the ancestry of all the branches stored inside it. A branch is essentially just a pointer to the head revision of a particular line of development. So if the branch has been deleted but the data is still in the repository, recovering it is a simple matter of discovering the identifier for the head revision.
Re: Pushing a bzr branch with rsync
This article responds to some of the points in Andrew's post about Pushing a bzr branch with rsync.
bzr rspush and shared repositories
First of all, to understand why bzr rspush refuses to operate on a non-standalone branch, it is worth looking at what it does:
- Download the revision history of the remote branch, and check to see that the remote head revision is an ancestor of the local head revision. If it is not, error out.
- If it is an ancestor, use rsync to copy the local branch and repository information to the remote location.
Now if you bring shared repositories into the mix, and there is a different set of branches in the local and remote repositories, then step (2) is liable to delete revision information needed by those branches that don't exist locally. This is not a theoretical concern if you do development from multiple machines (e.g. a desktop and a laptop) and publish to the same repository.
bzr branch https://launchpad.net/products/foo
One of the things we've been working on for Launchpad is good integration with Bazaar. Launchpad provides a way to register or host Bazaar branches, and nominate a Bazaar branch as representing a particular product series.
For each registered branch, there is a branch information page. This leads to a bit of confusion since Bazaar uses URLs to identify branches, so people try running bzr branch on a branch information page. We also get people trying to branch the product or product series pages.
--create-prefix not needed with bazaar.launchpad.net
When outlining the use of team branches on Launchpad previously, I used the --create-prefix option when pushing the branch to sftp://bazaar.launchpad.net. This was to make sure the initial push would succeed, even if the /~username/product directory the branch would be created in didn't exist.
To simplify things for users, we made a change to the SFTP server in the latest release, so that --create-prefix is no longer necessary. This does not affect the allowed branch directories though: the structure is used to associate the branches with products, and decide who can write to the branches.
Gnome-gpg 0.5.0 Released
Over the weekend, I released gnome-gpg 0.5.0. The main features in this release are support for running without gnome-keyring-daemon (of course, you can't save the passphrase in this mode), and use of the same keyring item name for the passphrase as Seahorse. The release can be downloaded here:
I also switched over from Arch to Bazaar. The conversion was fairly painless using bzr baz-import-branch, and means that I have both my revisions and Colin's revisions in a single tree. The branch can be pulled from:
Shared Branches using Bazaar and Launchpad
Earlier, David Allouche described how to host Bazaar branches on Launchpad. At the end, he alluded to the ability to create branches that can be committed to by anyone on a team. I'll describe how this works here.
Launchpad Teams
Launchpad allows people to organise themselves into teams. Most of the things people can do in Launchpad can also be done by teams, including owning branches.
You can create a new team at the following page:
Hosting bzr branches on Launchpad
Have you wanted to play around with bzr but had nowhere to share your branches? You can now publish them through Launchpad. David Allouche provides the details.
In short, you can upload branches to sftp://bazaar.launchpad.net/, and they will be published on http://bazaar.launchpad.net/.
JHBuild Improvements
I've been doing most JHBuild development in my bzr branch recently. If you have bzr 0.8rc1 installed, you can grab it here:
bzr branch http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.dev
I've been keeping a regular CVS import going at http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.cvs using Tailor, so changes people make to module sets in CVS make their way into the bzr branch. I've used a small hack so that merges back into CVS get recorded correctly in the jhbuild.cvs branch:
New Default Branch Format in Bzr
One of the new features in the soon to be released bzr 0.8 is the new "knit" storage format.
When comparing the size of the repository data for jhbuild with "knit" and "metadir" formats (metadir is just the old storage format with repository, branch and checkout bookkeeping separated), I see the following:
 | metadir | knit |
---|---|---|
Size | 9.9MB | 5.5MB |
Number of files | 1267 | 307 |
The reason for the smaller number of files is that information about all revisions in the repository is now stored together rather than in separate files. So the file count comes out at a constant plus 2 times the number of tracked files (a knit index file plus the knit data file). For comparison, the CVS repository I imported this from was 4.4MB, and comprised 143 files.
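To put rough numbers on that: the tree has on the order of 140 tracked files (the 143 files in the CVS repository are a reasonable proxy), so two knit files per tracked file accounts for around 280 of the 307 files, with the remainder being the fixed branch, repository and checkout bookkeeping.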
Repositories in Bzr
One of the new features coming up in the next release of bzr is support for shared repositories. This provides a way to reduce the disk space needed to store multiple related branches. To understand how repositories work, it helps to know a bit about how branches are stored by bzr.

There are three concepts that make up a bzr branch:
- A checkout or working tree. This is the source files you are working with. It represents the state of the source code at some recorded revision plus any local changes you've made. In the diagram on the right, it is represented as the red node.
- The branch, consisting of a linear sequence of revisions. This is represented by the blue nodes in the diagram. Note that there may be multiple paths from the first revision to the current revision due to branching and merging. The branch revision history indicates the path that was taken by this particular branch.
- The repository, being a store of the text of all the revisions in the ancestry of the branch, plus metadata about those revisions. This essentially stores information about every node and edge in the diagram.
In previous versions of bzr, this information was not clearly separated. However with the new default branch format in bzr 0.8 they are separated, and a particular directory need not contain all three parts, which is what makes the space savings and performance improvements possible.
Using Tailor to Convert a Gnome CVS Module
In my previous post, I mentioned using Tailor to import jhbuild into a Bazaar-NG branch. In case anyone else is interested in doing the same, here are the steps I used:
1. Install the tools
First create a working directory to perform the import, and set up tailor. I currently use the nightly snapshots of bzr, which did not work with Tailor, so I also grabbed bzr-0.7:
$ wget http://darcs.arstecnica.it/tailor-0.9.20.tar.gz
$ wget http://www.bazaar-ng.org/pkg/bzr-0.7.tar.gz
$ tar xzf tailor-0.9.20.tar.gz
$ tar xzf bzr-0.7.tar.gz
$ ln -s ../bzr-0.7/bzrlib tailor-0.9.20/bzrlib
2. Prepare a local CVS Repository to import from
Revision Control Migration and History Corruption
As most people probably know, the Gnome project is planning a migration to Subversion. In contrast, I've decided to move development of jhbuild over to bzr. This decision is a bit easier for me than for other Gnome modules because:
- No need to coordinate with GDP or GTP, since I maintain the docs and there are no translations.
- Outside of the moduleset definitions, the large majority of development and commits are done by me.
- There aren't really any interesting branches other than the mainline.
I plan to leave the Gnome module set definitions in CVS/Subversion though, since many people help in keeping them up to date, so leaving them there has some value.
OpenSSH support in bzr
I updated my bzr openssh plugin to be a proper patch against bzr.dev, and got it merged. So if you have bzr-openssh-sftp.py in your ~/.bazaar/plugins directory, you should remove it when upgrading.
Unfortunately there was a small problem resolving a conflict when merging it, which causes the path to get mangled a little inside _sftp_connect(). Once this is resolved, the mainline bzr should fully follow settings in ~/.ssh/config, because it will be running the same ssh binary as you normally use.
Using OpenSSH with bzr
One of the transports available in bzr is sftp. This is implemented using the Paramiko SSH and SFTP library. Unfortunately there are a few issues I experienced with the code:
- Since it is an independent implementation of SSH, none of my OpenSSH settings in ~/.ssh/config were recognised. The particular options I rely on include:
  - User: when the remote username doesn't match my local one. One less thing to remember when connecting to a remote machine.
  - IdentityFile: use different keys to access different machines.
  - ProxyCommand: access work machines that are behind the firewall.
- Paramiko does not currently support SSH compression. This is a real pain for larger trees.
The easiest way to fix all these problems would be to use OpenSSH directly, so I wrote a small plugin to do so. I decided to follow the model used to do this in gnome-vfs and Bazaar 1.x: communicate with an ssh subprocess via pipes and implement the SFTP protocol internally.
Comparison of Configs/Aliases in Bazaar, CVS and Subversion
When a project grows to a certain size, it will probably need a way to share code between the multiple software packages it releases. In the context of Gnome, one example is the sharing of the libbackground code between Nautilus and gnome-control-center. The simplest way to do this is to just copy over the files in question and manually synchronise them. This is a pain to do, and can lead to problems if changes are made to both copies, so you'd want to avoid it if possible. So most version control systems provide some way to share code in this way. As with the previous articles, I'll focus on Bazaar, CVS and Subversion.
Version control discussion on the Python list
The Python developers have been discussing a migration off CVS on the python-dev mailing list. During the discussion, Bazaar-NG was mentioned. A few posts of note:
- Mark Shuttleworth provides some information on the Bazaar roadmap. Importantly, Bazaar-NG will become Bazaar 2.0.
- Steve Alexander describes how we use Bazaar to develop Launchpad. This includes a description of the branch review process we use to integrate changes into the mainline.
I'm going to have to play around with bzr a bit more, but it looks very nice (and should require less typing than baz ...)
Version Control Workflow
Havoc: we are looking at ways to better integrate version control in Launchpad. There are many areas that could benefit from better use of version control, but I'll focus on bug tracking since you mentioned it.
Take the attachment handling in Bugzilla, for instance. In non-ancient versions, you can attach statuses to attachments such as "obsolete" (which has some special handling in the UI — striking out obsolete attachments and making it easy to mark attachments as obsolete when uploading a new attachment). This makes it easy to track and manage a sequence of patches as a fix for a bug is developed (bug 118372 is a metacity bug with such a chain of patches).
Bryan's Bazaar Tutorial
Bryan: there are a number of steps you can skip in your little tutorial:
- You don't need to set my-default-archive. If you often work with multiple archives, you can treat working copies for all archives pretty much the same. If you are currently inside a working copy, any branch names you use will be relative to your current one, so you can still use short branch names in almost all cases (this is similar to the reason I don't set $CVSROOT when working with CVS).
Merging In Bazaar
This posting follows on from my previous postings about Bazaar, but is a bit more advanced. In most cases you don’t need to worry about this, since the tools should just work. However if problems occur (or if you’re just curious about how things work), it can be useful to know a bit about what’s going on inside.
Changesets vs. Tree Snapshots
A lot of the tutorials for Arch list “changeset orientation” as one of its benefits over other systems such as Subversion, which were said to be based on “tree snapshots”. At first this puzzled me, since from my mathematical background the relationship between these two concepts seemed the same as the relationship between integrals and derivatives:
Bazaar (continued)
I got a few responses to the comparison between CVS, Subversion and Bazaar command line interfaces I posted earlier from Elijah, Mikael and David. As I stated in that post, I was looking at areas where the three systems could be compared. Of course, most people would choose Arch because of the things it can do that Subversion and CVS can't. Below I'll discuss two of those things: disconnected development and distributed development. I'll follow on from the examples in the previous post.
SCM Command Line Interface Comparison
With the current discussion on gnome-hackers about whether to switch Gnome over to Subversion, it has been brought up a number of times that people can switch from CVS to Subversion without thinking about it (the implication being that this is not true for Arch). Given the improvements in Bazaar, it isn't clear that Subversion is the only system that can claim this benefit.
For the sake of comparison, I'm considering the case of a shared repository accessed by multiple developers over SSH. While this doesn't exploit all the benefits of Arch, it gives a better comparison of the usability of the different tools.
6 January 2005
Travels
I've put up some of the photos from my trip to Mataró, and the short stopover in Japan on the way back. The Mataró set includes a fair number taken around La Sagrada Família, and the Japan set is mostly of things around the Naritasan temple (I didn't have enough time to get into Tokyo).
Multi-head
A few months back, I got a second monitor for my computer and configured it in a Xinerama-style setup (I'm actually using the MergedFB feature of the radeon driver, but it looks like Xinerama to X clients). Overall it has been pretty nice, but there are a few things that Gnome could handle a bit more nicely in this setup:
15 December 2004
Mataró
The conference has been great so far. The PyGTK BoF on the weekend was very productive, and I got to meet Anthony Baxter (who, as well as being the Python release manager, wrote a cool VoIP application called Shtoom). There was an announcement of some of the other things Canonical have been working on, which has been reported on in LWN (currently subscriber only) among other places.
Over the weekend, I had a little time to do some tourist-type things in Barcelona. I went to La Sagrada Família. It was a great place to visit, and there was an amazing level of detail in the architecture. You can walk almost to the very top of the cathedral, and see out over the Barcelona skyline (and see various bits of the cathedral not visible from the ground). I'll have to put my photos up online.
1 November 2004
Libtool
When looking into the libtool problem I mentioned earlier, I decided to take a look at the libtool-2.0 betas. Overall, it looks pretty good. I've updated the gnome-common autogen.sh script to support it. So if a package uses the LT_INIT macro, it will call libtoolize for you.
One of the new features in these versions of libtool is that if you have an AC_CONFIG_MACRO_DIR(directory) call in your configure.ac file, it will copy the libtool M4 macros to that directory. If you then call aclocal with the correct -I flag, autoconf will use that version of the macro.
Tag: DLNA
Seeking in Transcoded Streams with Rygel
When looking at various UPnP media servers, one of the features I wanted was the ability to play back my music collection through my PlayStation 3. The complicating factor is that most of my collection is encoded in Vorbis format, which is not yet supported by the PS3 (at this point, it doesn't seem likely that it ever will).
Both MediaTomb and Rygel could handle this to an extent, transcoding the audio to raw LPCM data to send over the network. This doesn't require much CPU power on the server side, and only requires 1.4 Mbit/s of bandwidth, which is manageable on most home networks. Unfortunately the only playback controls enabled in this mode are play and stop: if you want to pause, fast forward or rewind then you're out of luck.
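For what it's worth, that 1.4 Mbit/s figure is what you'd expect for CD-quality stereo LPCM: 44,100 samples/s × 16 bits × 2 channels ≈ 1.41 Mbit/s.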
Streaming Vorbis files from Ubuntu to a PS3
One of the nice features of the PlayStation 3 is the UPNP/DLNA media renderer. Unfortunately, the set of codecs is pretty limited, which is a problem since most of my music is encoded as Vorbis. MediaTomb was suggested to me as a server that could transcode the files to a format the PS3 could understand.
Unfortunately, I didn’t have much luck with the version included with Ubuntu 8.10 (Intrepid), and after a bit of investigation it seems that there isn’t a released version of MediaTomb that can send PCM audio to the PS3. So I put together a package of a subversion snapshot in my PPA which should work on Intrepid.
Tag: PlayStation 3
Seeking in Transcoded Streams with Rygel
When looking at various UPnP media servers, one of the features I wanted was the ability to play back my music collection through my PlayStation 3. The complicating factor is that most of my collection is encoded in Vorbis format, which is not yet supported by the PS3 (at this point, it doesn't seem likely that it ever will).
Both MediaTomb and Rygel could handle this to an extent, transcoding the audio to raw LPCM data to send over the network. This doesn't require much CPU power on the server side, and only requires 1.4 Mbit/s of bandwidth, which is manageable on most home networks. Unfortunately the only playback controls enabled in this mode are play and stop: if you want to pause, fast forward or rewind then you're out of luck.
More Rygel testing
In my last post, I said I had trouble getting Rygel's tracker backend to function and assumed that it was expecting an older version of the API. It turns out I was incorrect and the problem was due in part to Ubuntu specific changes to the Tracker package and the unusual way Rygel was trying to talk to Tracker.
The Tracker packages in Ubuntu remove the D-Bus service activation file for the "org.freedesktop.Tracker" bus name so that if the user has not chosen to run the service (or has killed it), it won't be automatically activated. Unfortunately, instead of just calling a Tracker D-Bus method, Rygel was trying to manually activate Tracker via a StartServiceByName() call. This would fail even if Tracker was running, hence my assumption that it was a tracker API version problem.
Ubuntu packages for Rygel
I promised Zeeshan that I'd have a look at his Rygel UPnP Media Server a few months back, and finally got around to doing so. For anyone else who wants to give it a shot, I've put together some Ubuntu packages for Jaunty and Karmic in a PPA here:
Most of the packages there are just rebuilds or version updates of existing packages, but the Rygel ones were done from scratch. It is the first Debian package I've put together from scratch and it wasn't as difficult as I thought it might be. The tips from the "Teach me packaging" workshop at the Canonical All Hands meeting last month were quite helpful.
Streaming Vorbis files from Ubuntu to a PS3
One of the nice features of the PlayStation 3 is the UPNP/DLNA media renderer. Unfortunately, the set of codecs is pretty limited, which is a problem since most of my music is encoded as Vorbis. MediaTomb was suggested to me as a server that could transcode the files to a format the PS3 could understand.
Unfortunately, I didn’t have much luck with the version included with Ubuntu 8.10 (Intrepid), and after a bit of investigation it seems that there isn’t a released version of MediaTomb that can send PCM audio to the PS3. So I put together a package of a subversion snapshot in my PPA which should work on Intrepid.
Tag: UPnP
Seeking in Transcoded Streams with Rygel
When looking at various UPnP media servers, one of the features I wanted was the ability to play back my music collection through my PlayStation 3. The complicating factor is that most of my collection is encoded in Vorbis format, which is not yet supported by the PS3 (at this point, it doesn't seem likely that it ever will).
Both MediaTomb and Rygel could handle this to an extent, transcoding the audio to raw LPCM data to send over the network. This doesn't require much CPU power on the server side, and only requires 1.4 Mbit/s of bandwidth, which is manageable on most home networks. Unfortunately the only playback controls enabled in this mode are play and stop: if you want to pause, fast forward or rewind then you're out of luck.
Watching iView with Rygel
One of the features of Rygel that I found most interesting was the external media server support. It looked like an easy way to publish information on the network without implementing a full UPnP/DLNA media server (i.e. handling the UPnP multicast traffic, transcoding to a format that the remote system can handle, etc).
As a small test, I put together a server that exposes the ABC's iView service to UPnP media renderers. The result is a bit rough around the edges, but the basic functionality works. The source can be grabbed using Bazaar:
More Rygel testing
In my last post, I said I had trouble getting Rygel's tracker backend to function and assumed that it was expecting an older version of the API. It turns out I was incorrect and the problem was due in part to Ubuntu specific changes to the Tracker package and the unusual way Rygel was trying to talk to Tracker.
The Tracker packages in Ubuntu remove the D-Bus service activation file for the "org.freedesktop.Tracker" bus name so that if the user has not chosen to run the service (or has killed it), it won't be automatically activated. Unfortunately, instead of just calling a Tracker D-Bus method, Rygel was trying to manually activate Tracker via a StartServiceByName() call. This would fail even if Tracker was running, hence my assumption that it was a tracker API version problem.
Ubuntu packages for Rygel
I promised Zeeshan that I'd have a look at his Rygel UPnP Media Server a few months back, and finally got around to doing so. For anyone else who wants to give it a shot, I've put together some Ubuntu packages for Jaunty and Karmic in a PPA here:
Most of the packages there are just rebuilds or version updates of existing packages, but the Rygel ones were done from scratch. It is the first Debian package I've put together from scratch and it wasn't as difficult as I thought it might be. The tips from the "Teach me packaging" workshop at the Canonical All Hands meeting last month were quite helpful.
Streaming Vorbis files from Ubuntu to a PS3
One of the nice features of the PlayStation 3 is the UPNP/DLNA media renderer. Unfortunately, the set of codecs is pretty limited, which is a problem since most of my music is encoded as Vorbis. MediaTomb was suggested to me as a server that could transcode the files to a format the PS3 could understand.
Unfortunately, I didn’t have much luck with the version included with Ubuntu 8.10 (Intrepid), and after a bit of investigation it seems that there isn’t a released version of MediaTomb that can send PCM audio to the PS3. So I put together a package of a subversion snapshot in my PPA which should work on Intrepid.
Tag: Django
django-openid-auth
Last week, we released the source code to django-openid-auth. This is a small library that can add OpenID based authentication to Django applications. It has been used for a number of internal Canonical projects, including the sprint scheduler Scott wrote for the last Ubuntu Developer Summit, so it is possible you've already used the code.
Rather than trying to cover all possible use cases of OpenID, it focuses on providing OpenID Relying Party support to applications using Django's django.contrib.auth authentication system. As such, it is usually enough to edit just two files in an existing application to enable OpenID login.
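To give a feel for what that involves, the changes are roughly along these lines. This is only a sketch based on a typical django-openid-auth setup of that era, so treat the exact setting values as illustrative:

```python
# settings.py -- enable the OpenID authentication backend
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django_openid_auth',
)
AUTHENTICATION_BACKENDS = (
    'django_openid_auth.auth.OpenIDBackend',
    'django.contrib.auth.backends.ModelBackend',
)
LOGIN_URL = '/openid/login/'
LOGIN_REDIRECT_URL = '/'

# urls.py -- route /openid/ to the views provided by the library
from django.conf.urls.defaults import patterns, include

urlpatterns = patterns('',
    (r'^openid/', include('django_openid_auth.urls')),
)
```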
Django support landed in Storm
Since my last article on integrating Storm with Django, I've merged my changes to Storm's trunk. This missed the 0.13 release, so you'll need to use Bazaar to get the latest trunk or wait for 0.14.
The focus since the last post was to get Storm to cooperate with Django's built in ORM. One of the reasons people use Django is the existing components that can be used to build a site. This ranges from the included user management and administration code to full web shop implementations. So even if you plan to use Storm for your Django application, your application will most likely use Django's ORM for some things.
Transaction Management in Django
In my previous post about Django, I mentioned that I found the transaction handling strategy in Django to be a bit surprising.
Like most object relational mappers, it caches information retrieved from the database, since you don't want to be constantly issuing SELECT queries for every attribute access. However, it defaults to committing after saving changes to each object. So a single web request might end up issuing many transactions:
Operation | Transaction |
---|---|
Change object 1 | Transaction 1 |
Change object 2 | Transaction 2 |
Change object 3 | Transaction 3 |
Change object 4 | Transaction 4 |
Change object 5 | Transaction 5 |
Unless no one else is accessing the database, there is a chance that other users could modify objects that the ORM has cached over the transaction boundaries. This also makes it difficult to test your application in any meaningful way, since it is hard to predict what changes will occur at those points. Django does provide a few ways to provide better transactional behaviour.
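One of those options in the Django releases of the time was the commit_on_success decorator, which wraps a whole view in a single transaction. A minimal sketch, with MyModel standing in for one of your own models:

```python
from django.db import transaction
from django.http import HttpResponse

from myapp.models import MyModel  # hypothetical model, for illustration only

@transaction.commit_on_success
def bump_counters(request):
    # All of the saves below are committed together when the view returns
    # successfully, or rolled back together if an exception is raised,
    # instead of one transaction per save().
    for obj in MyModel.objects.all()[:5]:
        obj.counter += 1
        obj.save()
    return HttpResponse("updated")
```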
Using Storm with Django
I've been playing around with Django a bit for work recently, and it has been interesting to see what choices they've made differently to Zope 3. There were a few things that surprised me:
- The ORM and database layer defaults to autocommit mode rather than using transactions. This seems like an odd choice given that all the major free databases support transactions these days. While autocommit might work fine when a web application is under light use, it is a recipe for problems at higher loads. By using transactions that last for the duration of the request, the testing you do is more likely to help with the high load situations.
- While there is a middleware class to enable request-duration transactions, it only covers the database connection. There is no global transaction manager to coordinate multiple DB connections or other resources.
- The ORM appears to only support a single connection for a request. While this is the most common case and should be easy to code with, allowing an application to expand past this limit seems prudent.
- The tutorial promotes schema generation from Python models, which I feel is the wrong choice for any application that is likely to evolve over time (i.e. pretty much every application). I've written about this previously and believe that migration based schema management is a more workable solution.
- It poorly reinvents thread local storage in a few places. This isn't too surprising for things that existed prior to Python 2.4, and probably isn't a problem for its default mode of operation.
Other than these things I've noticed so far, it looks like a nice framework.
Tag: OpenID
django-openid-auth
Last week, we released the source code to django-openid-auth. This is a small library that can add OpenID based authentication to Django applications. It has been used for a number of internal Canonical projects, including the sprint scheduler Scott wrote for the last Ubuntu Developer Summit, so it is possible you've already used the code.
Rather than trying to cover all possible use cases of OpenID, it focuses on providing OpenID Relying Party support to applications using Django's django.contrib.auth authentication system. As such, it is usually enough to edit just two files in an existing application to enable OpenID login.
Re: Continuing to Not Quite Get It at Google...
David: taking a quick look at Google's documentation, it sure looks like OpenID to me. The main items of note are:
- It documents the use of OpenID 2.0's directed identity mode. Yes this is "a departure from the process outlined in OpenID 1.0", but that could be considered true of all new features found in 2.0. Google certainly isn't the first to implement this feature:
- Yahoo's OpenID page recommends users enter "yahoo.com" in the identity box on web sites, which will initiate a directed identity authentication request.
- We've been using directed identity with Launchpad to implement single sign on for various Canonical/Ubuntu sites.
Given that Google account holders identify themselves by email address, users aren't likely to know a URL to enter, so this kind of makes sense.
Thoughts on OAuth
I've been playing with OAuth a bit lately. The OAuth specification fulfills a role that some people saw as a failing of OpenID: programmatic access to websites and authenticated web services. The expectation that OpenID would handle these cases seems a bit misguided since the two use cases are quite different:
- OpenID is designed on the principle of letting arbitrary OpenID providers talk to arbitrary relying parties and vice versa.
- OpenID is intentionally vague about how the provider authenticates the user. The only restriction is that the authentication must be able to fit into a web browsing session between the user and provider.
While these are quite useful features for a decentralised user authentication scheme, the requirements for web service authentication are quite different:
Using email addresses as OpenID identities (almost)
On the OpenID specs mailing list, there was another discussion about using email addresses as OpenID identifiers. So far it has mostly covered existing ground, but there was one comment that interested me: a report that you can log in to many OpenID RPs by entering a Yahoo email address.
Now there certainly isn't any Yahoo-specific code in the standard OpenID libraries, so you might wonder what is going on here. We can get some idea by using the python-openid library:
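Something along these lines shows what happens. This is a sketch against python-openid 2.x; the address is made up and the exact output depends on the library version and Yahoo's configuration:

```python
from openid.consumer.discover import discover

# Feed an email-address-shaped identifier straight into OpenID discovery.
claimed_id, services = discover('someuser@yahoo.com')

print claimed_id
for endpoint in services:
    # Each endpoint describes an OP the RP could send the user to.
    print endpoint.server_url, endpoint.type_uris
```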
Client Side OpenID
The following article discusses ideas that I wouldn't even class as vapourware, as I am not proposing to implement them myself. That said, the ideas should still be implementable if anyone is interested.
One well known security weakness in OpenID is its susceptibility to phishing attacks. An OpenID authentication request is initiated by the user entering their identifier into the Relying Party, which then hands control to the user's OpenID Provider through an HTTP redirect or form post. A malicious RP may instead forward the user to a site that looks like the user's OP and record any information they enter. As the user provided their identifier, the RP knows exactly what site to forge.
OpenID 2.0 Specification Approved
It looks like the OpenID Authentication 2.0 specification has finally been released, along with OpenID Attribute Exchange 1.0. While there are some questionable features in the new specification (namely XRIs), it seems like a worthwhile improvement over the previous specification. It will be interesting to see how quickly the new specification gains adoption.
While this is certainly an important milestone, there are still areas for improvement.
Best Practices For Managing Trust Relationships With OPs
OpenID Attribute Exchange
In my previous article on OpenID 2.0, I mentioned the new Attribute Exchange extension. To me this is one of the more interesting benefits of moving to OpenID 2.0, so it deserves a more in depth look.
As mentioned previously, the extension is a way of transferring information about the user between the OpenID provider and relying party.
Why use Attribute Exchange instead of FOAF or Microformats?
Identifier Reuse in OpenID 2.0
One of the issues that the OpenID 1.1 specification did not cover is the fact that an identity URL may not remain the property of a user over time. For large OpenID providers there are two cases they may run into:
- A user with a popular user name stops using the service, and the provider wants to make that name available to new users.
- A user changes their user name. This may be followed by someone taking over the old name.
In both cases, RPs would like some way to tell the difference between two different users who present the same ID at different points in time.
OpenID 2.0
Most people have probably seen or used OpenID. If you have used it, then it was most likely with the 1.x protocol. Now that OpenID 2.0 is close to release (apparently they really mean it this time ...), it is worth looking at the new features it enables. A few that have stood out to me include:
- proper extension support
- support for larger requests/responses
- directed identity
- attribute exchange extension
- support for a new naming monopoly
I'll now discuss each of these in a bit more detail.
Canonical Shop Open
The new Canonical Shop was opened recently, which allows you to buy anything from Ubuntu t-shirts and DVDs up to a 24/7 support contract for your server.
One thing to note is that this is the first site using our new Launchpad single sign-on infrastructure. We will be rolling this out to other sites in time, which should give a better user experience than the existing shared authentication system currently in place for the wikis.
Tag: Rhythmbox
Sansa Fuze
On my way back from Canada a few weeks ago, I picked up a SanDisk Sansa Fuze media player. Overall, I like it. It supports Vorbis and FLAC audio out of the box, has a decent amount of on board storage (8GB) and can be expanded with a MicroSDHC card. It does use a proprietary dock connector for data transfer and charging, but that's about all I don't like about it. The choice of accessories for this connector is underwhelming, so a standard mini-USB connector would have been preferable since I wouldn't need as many cables.
Tag: Sansa
Sansa Fuze
On my way back from Canada a few weeks ago, I picked up a SanDisk Sansa Fuze media player. Overall, I like it. It supports Vorbis and FLAC audio out of the box, has a decent amount of on board storage (8GB) and can be expanded with a MicroSDHC card. It does use a proprietary dock connector for data transfer and charging, but that's about all I don't like about it. The choice of accessories for this connector is underwhelming, so a standard mini-USB connector would have been preferable since I wouldn't need as many cables.
Tag: Sprint
In Montreal
I'm in Montreal through to the end of next week. The sub-zero temperatures are quite a change from Perth, where it got up to 39°C on the day I left.
The last time I was here was for Ubuntu Below Zero, so it is interesting seeing the same city covered in snow.
Comments:
Tester -
I hope you enjoy our nice, cool, snow covered city!
Zeroconf Branch Sharing with Bazaar
When collaborating with someone at one of these sprints the usual way to let others look at my work would be to commit the changes so that they could be pulled or merged by others. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.
Tag: Pygtk
PyGTK
PyGTK is a set of bindings for the GTK widget set. It provides an object oriented interface that is slightly higher level than the C one. It automatically does all the type casting and reference counting that you would have to do normally with the C API.
For more information on PyGTK, see its home page at www.pygtk.org.
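For anyone who hasn't seen it, a minimal PyGTK program looks roughly like this, with no manual casts or reference counting in sight:

```python
import gtk

def on_button_clicked(button):
    print "Hello, World!"

window = gtk.Window()
window.set_title("PyGTK example")
window.connect("destroy", gtk.main_quit)

button = gtk.Button("Click me")
button.connect("clicked", on_button_clicked)
window.add(button)

window.show_all()
gtk.main()
```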
GUADEC 2002: PyGTK
At GUADEC 2002 in Seville I gave a talk about the state of the Python bindings for GTK and GNOME. At this point, I was recommending people move off the old GTK 1.2 bindings, so this talk covered the process of porting existing applications.
GUADEC 2001: PyGTK
At GUADEC 2001 in Copenhagen, I gave a talk about the work I’d been doing on PyGTK. In particular, it talked about the major rewrite to build on top of ExtensionClass (a precursor of Python 2’s new style classes), and the start of GTK 2.0 support.
GUADEC 2000: Dia and PyGTK
At GUADEC 2000 in Paris, I gave talks about the Dia diagram editor, and my Python bindings for GTK and GNOME.
Tag: Lca2009
In Hobart
Today was the first day of the mini-conferences that lead up to linux.conf.au later on this week. I arrived yesterday after an eventful flight from Perth.
I was originally meant to fly out to Melbourne on the red eye leaving on Friday at 11:35pm, but just before I checked in they announced that the flight had been delayed until 4:00am the following day. As I hadn't had a chance to check in, I was able to get a pair of taxi vouchers to get home and back. I only got about 2 hours of sleep though, as they said they would turn off the baggage processing system at 3am. When I got back to the airport, I could see all the people who had stayed at the terminal spread out with airplane blankets. A little before the 4:00am deadline, another announcement was made saying the plane would now be leaving at 5:00am. Apparently they had needed to fly a replacement component in from over east to fix a problem found during maintenance. Still, it seems it wasn't the most delayed Qantas flight for that weekend and it did arrive in one piece.
Tag: Twisted
Using Twisted Deferred objects with gio
The gio library provides both synchronous and asynchronous interfaces for performing IO. Unfortunately, the two APIs require quite different programming styles, making it difficult to convert code written to the simpler synchronous API to the asynchronous one.
For C programs this is unavoidable, but for Python we should be able to do better. And if you're doing asynchronous event driven code in Python, it makes sense to look at Twisted. In particular, Twisted's Deferred objects can be quite helpful.
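The basic idea is to wrap the callback-style gio API so that callers just get a Deferred back. A rough sketch, assuming the old static gio bindings and a Twisted reactor integrated with the GLib main loop (e.g. glib2reactor):

```python
import gio
from twisted.internet import defer

def read_file_contents(path):
    """Return a Deferred that fires with the contents of the file at path."""
    d = defer.Deferred()
    gfile = gio.File(path=path)

    def load_contents_cb(gfile, result):
        try:
            contents, length, etag = gfile.load_contents_finish(result)
        except gio.Error as exc:
            # Propagate the error through the Deferred's errback chain.
            d.errback(exc)
        else:
            d.callback(contents)

    gfile.load_contents_async(load_contents_cb)
    return d

# Usage: read_file_contents("/etc/hostname").addCallback(handle_data)
```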
Tag: Storm
Django support landed in Storm
Since my last article on integrating Storm with Django, I've merged my changes to Storm's trunk. This missed the 0.13 release, so you'll need to use Bazaar to get the latest trunk or wait for 0.14.
The focus since the last post was to get Storm to cooperate with Django's built in ORM. One of the reasons people use Django is the existing components that can be used to build a site. This ranges from the included user management and administration code to full web shop implementations. So even if you plan to use Storm for your Django application, your application will most likely use Django's ORM for some things.
Transaction Management in Django
In my previous post about Django, I mentioned that I found the transaction handling strategy in Django to be a bit surprising.
Like most object relational mappers, it caches information retrieved from the database, since you don't want to be constantly issuing SELECT queries for every attribute access. However, it defaults to committing after saving changes to each object. So a single web request might end up issuing many transactions:
Operation | Transaction |
---|---|
Change object 1 | Transaction 1 |
Change object 2 | Transaction 2 |
Change object 3 | Transaction 3 |
Change object 4 | Transaction 4 |
Change object 5 | Transaction 5 |
Unless no one else is accessing the database, there is a chance that other users could modify objects that the ORM has cached over the transaction boundaries. This also makes it difficult to test your application in any meaningful way, since it is hard to predict what changes will occur at those points. Django does provide a few ways to provide better transactional behaviour.
Storm 0.13
Yesterday, Thomas rolled the 0.13 release of Storm, which can be downloaded from Launchpad. Storm is the object relational mapper for Python used by Launchpad and Landscape, so it is capable of supporting quite large scale applications. It is seven months since the last release, so there are a lot of improvements. Here are a few simple statistics:
 | 0.12 | 0.13 | Change |
---|---|---|---|
Tarball size (KB) | 117 | 155 | 38 |
Mainline revisions | 213 | 262 | 49 |
Revisions in ancestry | 552 | 875 | 323 |
So it is a fairly significant update by any of these metrics. Among the new features are:
Using Storm with Django
I've been playing around with Django a bit for work recently, and it has been interesting to see what choices they've made differently to Zope 3. There were a few things that surprised me:
- The ORM and database layer defaults to autocommit mode rather than using transactions. This seems like an odd choice given that all the major free databases support transactions these days. While autocommit might work fine when a web application is under light use, it is a recipe for problems at higher loads. By using transactions that last for the duration of the request, the testing you do is more likely to help with the high load situations.
- While there is a middleware class to enable request-duration transactions, it only covers the database connection. There is no global transaction manager to coordinate multiple DB connections or other resources.
- The ORM appears to only support a single connection for a request. While this is the most common case and should be easy to code with, allowing an application to expand past this limit seems prudent.
- The tutorial promotes schema generation from Python models, which I feel is the wrong choice for any application that is likely to evolve over time (i.e. pretty much every application). I've written about this previously and believe that migration based schema management is a more workable solution.
- It poorly reinvents thread local storage in a few places. This isn't too surprising for things that existed prior to Python 2.4, and probably isn't a problem for its default mode of operation.
Other than these things I've noticed so far, it looks like a nice framework.
Two‐Phase Commit in Python's DB‐API
Marc uploaded a new revision of the Python DB-API 2.0 Specification yesterday that documents the new two phase commit extension that I helped develop on the db-sig mailing list.
My interest in this started from the desire to support two phase commit in Storm – without that feature there are far fewer occasions where its ability to talk to multiple databases can be put to use. As I was doing some work on psycopg2 for Launchpad, I initially put together a PostgreSQL specific patch, which was (rightly) rejected by Federico.
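The shape of the new API is roughly as follows. This is only a sketch: the connection strings, table and transaction identifiers are made up, and it assumes a DB-API driver that actually implements the extension:

```python
import psycopg2

# Two independent databases that need to commit (or fail) together.
conn_a = psycopg2.connect("dbname=accounts_a")
conn_b = psycopg2.connect("dbname=accounts_b")

# The same global transaction ID, with a different branch qualifier
# for each participating resource.
xid_a = conn_a.xid(1, "transfer-1234", "accounts-a")
xid_b = conn_b.xid(1, "transfer-1234", "accounts-b")

conn_a.tpc_begin(xid_a)
conn_b.tpc_begin(xid_b)

conn_a.cursor().execute(
    "UPDATE accounts SET balance = balance - 10 WHERE id = 1")
conn_b.cursor().execute(
    "UPDATE accounts SET balance = balance + 10 WHERE id = 1")

# Phase one: each database durably promises that it can commit.
conn_a.tpc_prepare()
conn_b.tpc_prepare()

# Phase two: make the work permanent everywhere (tpc_rollback() would
# abandon both halves if either prepare step had failed).
conn_a.tpc_commit()
conn_b.tpc_commit()
```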
Schema Generation in ORMs
When Storm was released, one of the comments made was that it did not include the ability to generate a database schema from the Python classes used to represent the tables, while this feature is available in a number of competing ORMs. The simple reason for this is that we haven't used schema generation in any of our ORM-using projects.
Furthermore I'd argue that schema generation is not really appropriate for long lived projects where the data stored in the database is important. Imagine developing an application along these lines:
Storm Released
This week at the EuroPython conference, Gustavo Niemeyer announced the release of Storm and gave a tutorial on using it.
Storm is a new object relational mapper for Python that was developed for use in some Canonical projects, and we've been working on moving Launchpad over to it. I'll discuss a few of the nice features of the package:
Loose Binding Between Database Connections and Classes
Storm has a much looser binding between database connections and the classes used to represent records in particular tables. The standard way of querying the database uses a store object:
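In rough terms it looks like this. This is a minimal sketch against an in-memory SQLite database with a made-up Person table, not code from the tutorial itself:

```python
from storm.locals import Int, Unicode, Store, create_database

class Person(object):
    __storm_table__ = "person"
    id = Int(primary=True)
    name = Unicode()

# The class above says nothing about which database it lives in; that
# binding only happens when a store is created.
database = create_database("sqlite:")
store = Store(database)
store.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name VARCHAR)")

person = Person()
person.name = u"Alice"
store.add(person)
store.flush()

# Queries go through the store, so the same classes can be used with
# several different databases.
print store.find(Person, Person.name == u"Alice").one().id
```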
Tag: Zope
Storm 0.13
Yesterday, Thomas rolled the 0.13 release of Storm, which can be downloaded from Launchpad. Storm is the object relational mapper for Python used by Launchpad and Landscape, so it is capable of supporting quite large scale applications. It is seven months since the last release, so there are a lot of improvements. Here are a few simple statistics:
 | 0.12 | 0.13 | Change |
---|---|---|---|
Tarball size (KB) | 117 | 155 | 38 |
Mainline revisions | 213 | 262 | 49 |
Revisions in ancestry | 552 | 875 | 323 |
So it is a fairly significant update by any of these metrics. Among the new features are:
Using Storm with Django
I've been playing around with Django a bit for work recently, and it has been interesting to see what choices they've made differently to Zope 3. There were a few things that surprised me:
- The ORM and database layer defaults to autocommit mode rather than using transactions. This seems like an odd choice given that all the major free databases support transactions these days. While autocommit might work fine when a web application is under light use, it is a recipe for problems at higher loads. By using transactions that last for the duration of the request, the testing you do is more likely to help with the high load situations.
- While there is a middleware class to enable request-duration transactions, it only covers the database connection. There is no global transaction manager to coordinate multiple DB connections or other resources.
- The ORM appears to only support a single connection for a request. While this is the most common case and should be easy to code with, allowing an application to expand past this limit seems prudent.
- The tutorial promotes schema generation from Python models, which I feel is the wrong choice for any application that is likely to evolve over time (i.e. pretty much every application). I've written about this previously and believe that migration based schema management is a more workable solution.
- It poorly reinvents thread local storage in a few places. This isn't too surprising for things that existed prior to Python 2.4, and probably isn't a problem for its default mode of operation.
Other than these things I've noticed so far, it looks like a nice framework.
Tag: PostgreSQL
Psycopg migrated to Bazaar
Last week we moved psycopg from Subversion to Bazaar. I did the migration using Gustavo Niemeyer's svn2bzr tool with a few tweaks to map the old Subversion committer IDs to the email address form conventionally used by Bazaar.
The tool does a good job of following tree copies and creating related Bazaar branches. It doesn't have any special handling for stuff in the tags/ directory (it produces new branches, as it does for other tree copies). To get real Bazaar tags, I wrote a simple post-processing script to calculate the heads of all the branches in a tags/ directory and set them as tags in another branch (provided those revisions occur in its ancestry). This worked pretty well except for a few revisions synthesised by a previous cvs2svn migration. As these tags were from pretty old psycopg 1 releases, I don't know how much it matters.
Psycopg2 2.0.7 Released
Yesterday Federico released version 2.0.7 of psycopg2 (a Python database adapter for PostgreSQL). I made a fair number of the changes in this release to make it more usable for some of Canonical's applications. The new release should work with the development version of Storm, and shouldn't be too difficult to get everything working with other frameworks.
Some of the improvements include:
- Better selection of exceptions based on the SQLSTATE result field. This causes a number of errors that were reported as ProgrammingError to use a more appropriate exception (e.g. DataError, OperationalError, InternalError). This was the change that broke Storm's test suite as it was checking for ProgrammingError on some queries that were clearly not programming errors.
- Proper error reporting for commit() and rollback(). These methods now use the same error reporting code paths as execute(), so an integrity error on commit() will now raise IntegrityError rather than OperationalError.
- The compile-time switch that controls whether the display_size member of Cursor.description is calculated is now turned off by default. The code was quite expensive and the field is of limited use (and not provided by a number of other database adapters).
- New QueryCanceledError and TransactionRollbackError exceptions. The first is useful for handling queries that are canceled by statement_timeout. The second provides a convenient way to catch serialisation failures and deadlocks: errors that indicate the transaction should be retried (see the sketch below).
- Fixes for a few memory leaks and GIL misuses. One of the leaks was in the notice processing code that could be particularly problematic for long-running daemon processes.
- Better test coverage and a driver script to run the entire test suite in one go. The tests should all pass too, provided your database cluster uses unicode (there was a report just before the release of one test failing for a LATIN1 cluster).
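As a rough illustration of the TransactionRollbackError case mentioned above, a serialisable transaction can now be retried with a fairly natural except clause (the connection parameters and table here are made up):

```python
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=example")
conn.set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)

while True:
    try:
        cursor = conn.cursor()
        cursor.execute(
            "UPDATE counters SET value = value + 1 WHERE name = 'hits'")
        conn.commit()
        break
    except psycopg2.extensions.TransactionRollbackError:
        # Serialisation failure or deadlock: roll back and try again.
        conn.rollback()
```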
If you're using previous versions of psycopg2, I'd highly recommend upgrading to this release.
Two‐Phase Commit in Python's DB‐API
Marc uploaded a new revision of the Python DB-API 2.0 Specification yesterday that documents the new two phase commit extension that I helped develop on the db-sig mailing list.
My interest in this started from the desire to support two phase commit in Storm – without that feature there are far fewer occasions where its ability to talk to multiple databases can be put to use. As I was doing some work on psycopg2 for Launchpad, I initially put together a PostgreSQL specific patch, which was (rightly) rejected by Federico.
Tag: Beer
Honey Bock Results
Since bottling the honey bock last month, I've tried a bottle last week and this week. While it is a very nice beer, the honey flavour is not very noticeable. That said, the second bottle I tried had a slightly stronger honey flavour than the first so it might just need to mature for another month or so.
If I was to do this beer again, it would make sense to use a stronger flavoured honey or just use more honey. Then again, perhaps it isn't worth trying honey flavoured dark beers.
Honey Bock
Yesterday I bottled the honey bock that has been brewing over the last week. This one was made with the following ingredients:
- A Black Rock Bock beer kit.
- 1kg of honey
- 500g of Dextrose
- Caster sugar for carbonation
The only difference from the standard procedure was replacing part of the brewing sugar with honey. Before being added, the honey needs to be pasteurised, which involves heating it up to 80°C and keeping it at that temperature for half an hour or so. This kills off any wild yeasts or other undesirables that might spoil the brew.
Beer Pouring Machine
One of the novelties in the airport lounge at Narita was a beer pouring machine. It manages to consistently pour a good glass of beer every time. You start by placing the glass in the machine:

When you press the start button, it tilts the glass and pours the beer down the side of the glass:

After filling the glass the machine tilts the glass upright again and some extra foam comes out of the second nozzle:
Chilli Beer
Got around to tasting the latest batch of home-brew beer recently: a chilli beer. It came out very nicely: very refreshing but with a chilli aftertaste in the back of your throat. You can definitely taste the chilli after drinking a pint :).
I used a beer kit as a base, since I haven't yet had the patience to do a brew from scratch. The ingredients were:
- A Black Rock Mexican Lager beer kit.
- 1kg of Coopers brewing sugar.
- About 20 red chillis.
- Caster sugar for carbonation.
I took half the chillis, cut off the stems and chopped them up roughly (in hindsight, it probably would have been enough to cut them lengthwise). I then covered them with a small amount of water in a pot and pasteurised them in the oven at 80°C for about half an hour. The wort was then prepared as normal, but with the pasteurised chillis added before the yeast.
17 June 2002
Work
Last week, one of the servers died because one of the sticks of memory died. After pulling it out, the system booted fine. It would have been a lot easier to test if I didn't have to open it up to plug a floppy drive in. I now have Memtest86 in the GRUB boot menu. Was pretty easy to set up:
cp memtest.bin /boot
grubby --add-kernel="/boot/memtest.bin" --title="Memtest86"
This is the second stick of DDR memory we have had that died; probably due to overheating. As the server has 5 IDE ribbon cables, I might look at getting rounded cables which Jaycar is stocking these days.
12 May 2002
The Call for Papers is out:
http://conf.linux.org.au/pipermail/lca-helpers/2002-May/000109.html
There is also an HTML version on the website, but it doesn't quite match the final version of the CFP (yet).
Beer
Bottled the honey ale today. It will be interesting to see how it tastes in a few weeks. The sweetness was gone, but I could definitely taste the honey still. It should be very nice.
GNOME 2.0
Put out yet another beta of libglade for the GNOME 2.0 beta 5 release, which should be coming out this week. I should also make new releases of pygtk and gnome-python as well. I have made a number of improvements to the code generator, so pygtk is a bit more complete. The last gnome-python release no longer compiles with the latest GConf, so it also needs a new release.
5 May 2002
Started another batch of beer yesterday. This time I mixed in a kilogram of honey (replacing some of the sugar), so it will be interesting to see how this turns out. The bubbles coming out of the airlock smell fairly different, so it will hopefully go okay.
Merged some patches from various people into my jhbuild build scripts over the weekend. Thanks to jdahlin, it now has support for getting things from other CVS trees. At the moment, we have rules for thinice2, gstreamer and mrproject using this feature.
Tag: Loom
Looms Rock
While doing a bit of work on Storm, I decided to try out the loom plugin for Bazaar. The loom plugin is designed to help maintain a stack of changes to a base branch (similar to quilt). Some use cases where this sort of tool is useful include:
- Maintaining a long-running diff to a base branch. Distribution packaging is one such example.
- While developing a new feature, the underlying code may require some refactoring. A loom could be used to keep the refactoring separate from the feature work so that it can be merged ahead of the feature.
- For complex features, code reviewers often prefer changes to be broken down into a sequence of simpler changes. A loom can help maintain the stack of changes in a coherent fashion.
A loom branch helps to manage these different threads in a coherent manner. Each thread in the loom contains all the changes from the threads below it, so the revision graph ends up looking something like this:
Tag: D-Bus
bzr-dbus hacking
When working on my bzr-avahi plugin, Robert asked me about how it should fit in with his bzr-dbus plugin. The two plugins offer complementary features, and could share a fair bit of infrastructure code. Furthermore, by not cooperating, there is a risk that the two plugins could break when both installed together.
Given the dependencies of the two packages, it made more sense to put common infrastructure in bzr-dbus and have bzr-avahi depend on it. That said, bzr-dbus is a bit more difficult to install than bzr-avahi, since it requires installation of a D-Bus service activation file. After looking at the code, it seemed that there was room to simplify how bzr-dbus worked and improve its reliability at the same time.
Tag: Valgrind
Running Valgrind on Python Extensions
As most developers know, Valgrind is an invaluable tool for finding memory leaks. However, when debugging Python programs the pymalloc allocator gets in the way.
There is a Valgrind suppression file distributed with Python that gets rid of most of the false positives, but does not give particularly good diagnostics for memory allocated through pymalloc. To properly analyse leaks, you often need to recompile Python with pymalloc disabled.
As I don't like having to recompile Python I took a look at Valgrind's client API, which provides a way for a program to detect whether it is running under Valgrind. Using the client API I was able to put together a patch that automatically disables pymalloc when appropriate. It can be found attached to bug 2422 in the Python bug tracker.
Tag: Avahi
Zeroconf Branch Sharing with Bazaar
When collaborating with someone at one of these sprints the usual way to let others look at my work would be to commit the changes so that they could be pulled or merged by others. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.
ZeroConf support for Bazaar
When at conferences and sprints, I often want to see what someone else is working on, or to let other people see what I am working on. Usually we end up pushing up to a shared server and using that as a way to exchange branches. However, this can be quite frustrating when competing for outside bandwidth when at a conference.
It is possible to share the branch from a local web server, but that still means you need to work out the addressing issues.
Avahi on Breezy followup
So after I posted some instructions for setting up Avahi on Breezy, a fair number of people at UBZ did so. For most people this worked fine, but it seems that a few people's systems started spewing a lot of network traffic.
It turns out that the problem was actually caused by the zeroconf package (which I did not suggest installing) rather than Avahi. The zeroconf package is not needed for service discovery or .local name lookup, so if you are at UBZ you should remove the package or suffer the wrath of Elmo.
Avahi on Breezy
During conferences, it is often useful to be able to connect to other people's machines (e.g. for collaborative editing sessions with Gobby). This is a place where mDNS hostname resolution can come in handy, so you don't need to remember IP addresses.
This is quite easy to set up on Breezy:
- Install the avahi-daemon, avahi-utils and libnss-mdns packages from universe.
- Restart dbus in order for the new system bus security policies to take effect with "sudo invoke-rc.d dbus restart".
- Start avahi-daemon with "sudo invoke-rc.d avahi-daemon start".
- Edit /etc/nsswitch.conf, and add "mdns" to the end of the "hosts:" line.
Now your hostname should be advertised to the local network, and you can connect to other hosts by name (of the form hostname.local). You can also get a list of the currently advertised hosts and services with the avahi-discover program.
Tag: Bonjour
Zeroconf Branch Sharing with Bazaar
When collaborating with someone at one of these sprints the usual way to let others look at my work would be to commit the changes so that they could be pulled or merged by others. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.
Tag: Hackathon
Zeroconf Branch Sharing with Bazaar
When collaborating with someone at one of these sprints the usual way to let others look at my work would be to commit the changes so that they could be pulled or merged by others. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.
Tag: Zeroconf
Zeroconf Branch Sharing with Bazaar
When collaborating with someone at one of these sprints the usual way to let others look at my work would be to commit the changes so that they could be pulled or merged by others. With legacy version control systems like CVS or Subversion, this would generally result in me uploading all my changes to a server in another country only for them to be downloaded back to the sprint location by others.
Tag: Inkscape
Inkscape Migrated to Launchpad
Yesterday I performed the migration of Inkscape's bugs from SourceForge.net to Launchpad. This was a full import of all their historic bug data – about 6900 bugs.
As the import only had access to the SF user names for bug reporters, commenters and assignees, it was not possible to link them up to existing Launchpad users in most cases. This means that duplicate person objects have been created with email addresses like $USERNAME@users.sourceforge.net.
Tag: Openid.ax
OpenID Attribute Exchange
In my previous article on OpenID 2.0, I mentioned the new Attribute Exchange extension. To me this is one of the more interesting benefits of moving to OpenID 2.0, so it deserves a more in depth look.
As mentioned previously, the extension is a way of transferring information about the user between the OpenID provider and relying party.
Why use Attribute Exchange instead of FOAF or Microformats?
Tag: Gnome-Power-Manager
Weird GNOME Power Manager error message
Since upgrading to Ubuntu Gutsy I've occasionally been seeing the following notification from GNOME Power Manager:

I'd usually trigger this error by unplugging the AC adapter and then picking suspend from GPM's left click menu.
My first thought on seeing this was "What's a policy timeout, and why is it not valid?" followed by "I don't remember setting a policy timeout". Looking at bug 492132 I found a pointer to the policy_suppression_timeout gconf value, whose description gives a bit more information.
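If you want to poke at the value yourself, gconftool-2 can dump the GNOME Power Manager keys and show their documentation. The exact key path below is a guess based on the key name, so adjust it to whatever the first command prints:

# dump all gnome-power-manager keys and look for the policy timeout
$ gconftool-2 -R /apps/gnome-power-manager | grep policy
# show the description for the key found above (path may differ)
$ gconftool-2 --long-docs /apps/gnome-power-manager/general/policy_suppression_timeout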
Tag: Japan
Beer Pouring Machine
One of the novelties in the airport lounge at Narita was a beer pouring machine. It manages to consistently pour a good glass of beer every time. You start by placing the glass in the machine:

When you press the start button, it tilts the glass and pours the beer down the side of the glass:

After filling the glass the machine tilts the glass upright again and some extra foam comes out of the second nozzle:
Tag: Narita
Beer Pouring Machine
One of the novelties in the airport lounge at Narita was a beer pouring machine. It manages to consistently pour a good glass of beer every time. You start by placing the glass in the machine:

When you press the start button, it tilts the glass and pours the beer down the side of the glass:

After filling the glass the machine tilts the glass upright again and some extra foam comes out of the second nozzle:
Tag: XML
Stupid Patent Application
I recently received a bug report about the free space calculation in gnome-vfs-obexftp. At the moment, the code exposes a single free space value for the OBEX connection. However, some phones expose multiple volumes via the virtual file system presented via OBEX.
It turns out my own phone does this, which was useful for testing. The Nokia 6230 can store things on the phone's memory (named DEV in the OBEX capabilities list), or the Multimedia Card (named MMC). So the fix would be to show the DEV free space when browsing folders on DEV and the MMC free space when browsing folders on MMC.
nxml-mode
Started playing with nxml-mode, which makes editing XML much nicer in emacs (psgml-1.3 does an okay job, but the indenter and tag closer sometimes get confused by empty elements). There is a nice article about nxml-mode on xmlhack which gives an introduction to the mode.
The first thing that struck me about nxml in comparison to psgml was the lack of syntax highlighting. It turned out that the reason for this was that colours were only specified for the light background case, and I was using a dark background. After setting the colours appropriately (customise faces matching the regexp ^nxml-), I could see that the highlighting was a lot better than what psgml did.
Atom
Have been playing round with Atom, which looks like a nicer form of RSS. Assuming your content is already in XHTML, it looks a lot easier to generate an Atom file compared to an RSS file, because the content can be embedded directly, rather than needing to be escaped as character data. Similarly, an Atom file is easier to process using standard XML tools compared to RSS because the document only needs to be parsed once to get at the content (which is probably what you were after anyway).
Tag: JHBuild
JHBuild Updates
The progress on JHBuild has continued (although I haven't done much in the last week or so). Frederic Peters of JhAutobuild fame now has a CVS account to maintain the client portion of that project in tree.
Perl Modules (#342638)
One of the other things that Frederic has been working on is support for building Perl modules (which use a Makefile.PL instead of a configure script). His initial patch worked fine for tarballs, but by switching over to the new generic version control code in jhbuild it was possible to support Perl modules maintained in any of the supported version control systems without extra effort.
JHBuild Improvements
I've been doing most JHBuild development in my bzr branch recently. If you have bzr 0.8rc1 installed, you can grab it here:
bzr branch http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.dev
I've been keeping a regular CVS import going at http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.cvs using Tailor, so changes people make to module sets in CVS make their way into the bzr branch. I've used a small hack so that merges back into CVS get recorded correctly in the jhbuild.cvs branch:
Using Tailor to Convert a Gnome CVS Module
In my previous post, I mentioned using Tailor to import jhbuild into a Bazaar-NG branch. In case anyone else is interested in doing the same, here are the steps I used:
1. Install the tools
First create a working directory to perform the import, and set up tailor. I currently use the nightly snapshots of bzr, which did not work with Tailor, so I also grabbed bzr-0.7:
$ wget http://darcs.arstecnica.it/tailor-0.9.20.tar.gz
$ wget http://www.bazaar-ng.org/pkg/bzr-0.7.tar.gz
$ tar xzf tailor-0.9.20.tar.gz
$ tar xzf bzr-0.7.tar.gz
$ ln -s ../bzr-0.7/bzrlib tailor-0.9.20/bzrlib
2. Prepare a local CVS Repository to import from
Revision Control Migration and History Corruption
As most people probably know, the Gnome project is planning a migration to Subversion. In contrast, I've decided to move development of jhbuild over to bzr. This decision is a bit easier for me than for other Gnome modules because:
- No need to coordinate with the GDP or GTP, since I maintain the docs and there are no translations.
- Outside of the moduleset definitions, the large majority of development and commits are done by me.
- There aren't really any interesting branches other than the mainline.
I plan to leave the Gnome module set definitions in CVS/Subversion though, since many people help in keeping them up to date, so leaving them there has some value.
GraphViz
On the gtk-doc-list mailing list, Matthias mentioned that the GraphViz license has been changed to the CPL (the same license as used for Eclipse), which is considered Free by both the FSF and OSI (although still GPL incompatible). This should remove the barriers that prevented it getting packaged by Linux distributions.
Due to the previous licensing, RMS urged developers of GNU software to not even produce output in the form that the GraphViz tools use as input. Maybe that can change now. While the license is GPL incompatible, the GraphViz tools can easily be invoked from the command line, passing a .dot file in, and getting output in PNG, PS, SVG, etc. format (or even another .dot file with the layout information added), which is enough for pretty much all uses of the tools.
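For example (the file names here are just placeholders), rendering a graph description is a single command per output format:

# render a graph description to PNG and SVG
$ dot -Tpng deps.dot -o deps.png
$ dot -Tsvg deps.dot -o deps.svg
# re-emit the .dot file with the layout coordinates filled in
$ dot -Tdot deps.dot -o deps-layout.dot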
8 December 2004
Mataró
I've been in Mataró (about an hour from Barcelona) now since Sunday, and it's quite a nice place. It is a bit cooler than Perth due to it being the middle of Winter here, but the way most of the locals are rugged up you'd think it was a lot colder. It's great to catch up with everyone, and a number of pygtk developers will be turning up over the next few days for the BOF on the weekend.
20 October 2004
Even More Icon Theme Stuff
To make it a bit easier to correctly display themed icons, I added support to GtkImage, so that it is as easy as calling gtk_image_new_from_icon_name() or gtk_image_set_from_icon_name(). The patch is attached to bug #155688.
This code takes care of theme changes so the application developer doesn't need to. Once this is in, it should be trivial to add themed icon support to various other widgets that use GtkImage (such as GtkAbout and GtkToolItem).
4 October 2004
Icon Theme APIs (continued)
Of course, after recommending that people use gtk_icon_theme_load_icon() to perform the icon load and scale the icon for you, Ross manages to find a bug in that function.
If the icon is not found in the icon theme, but instead in the legacy $prefix/share/pixmaps directory, then gtk_icon_theme_load_icon() will not scale the image down (it will scale them up if necessary though).
jhbuild
Jhbuild now includes a notification icon when running in the default terminal mode. The code is loosely based on Davyd's patch, but instead uses Zenity's notification icon support. If you have the HEAD branch of Zenity installed, it should display without any further configuration. Some of the icons are a little difficult to tell apart at notification icon sizes, so it would be good to update some of them.
6 September 2004
linux.conf.au
The LCA2004 team have put together the conference CD and DVD. Apparently they will arrive in the mail in about a week.
They put the CD contents on the web first, and I was a bit disappointed that the recording of my talk was missing (it does include my slides though). However, when they put the DVD contents up I saw that it included a video recording of the talk, which is pretty cool.
20 May 2004
Mail Viruses
The barrage of mail viruses and their side effects is getting quite annoying. In the past week, I've had gnome.org mailing list subscriptions disabled twice. After looking at the mailing list archive, it was pretty obvious why.
The mail server that serves my account is set up to reject Windows executables and a few other viruses at SMTP delivery time (so it isn't responsible for generating bounces). Unfortunately, a number of viruses got through to the mailing lists and were subsequently rejected before reaching my account. After a certain number of bounces of this type, mailman helpfully disables delivery.
14 April 2004
After the break-in at the gnome.org web server, the CVS server was moved over to the new server HP donated. However, the LXR and Bonsai tools weren't considered as high a priority, so they have not been restored yet.
Since it was easier to set up than either LXR or Bonsai, I set up ViewCVS (with jdub's help), so we now have online repository browsing again. It doesn't provide all the features found in the other packages, so it'll be good to get them set up again too though.
jhbuild
Made some changes to the way "jhbuild bootstrap" works. Whereas previously bootstrap would check to see if each required build tool was installed by the distro and only build the tools that were missing, it now builds all the tools.
If you wish to use the build tools supplied by your distro, it is now recommended that you don't run bootstrap. To perform the "check that required tools are installed" job that bootstrap used to do, you can instead run the "jhbuild sanitycheck" command, which will do these checks and report any errors. The sanitycheck command also checks for other configuration problems, such as whether all the different automake versions will be able to find the libtool macros.
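In practice the recommended sequence looks something like this (a sketch; only run bootstrap if sanitycheck complains about missing tools and you don't want to install them from your distro):

# check that the required build tools and configuration are in place
$ jhbuild sanitycheck
# only needed if you want jhbuild to build its own copies of the tools
$ jhbuild bootstrap
# then build the module set as usual
$ jhbuild build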
17 February 2004
Weather
It has been really hot and humid here for the past few days. While it is not uncommon to have hot weather in Perth, high humidity is quite unusual. It seems to be due to the floods up in the north of the state (they had a report on the news about an 18 person town that had been without a pub for 3 days).
There was a big thunderstorm last night, so hopefully things will get back to normal. Unfortunately, it is still quite hot (9:20am at the moment, and it's 33°C with 62% relative humidity) and there has been an order preventing people from using air conditioners due to supply problems at the power company.
12 February 2004
jhbuild
Had a pretty good response to the jhbuild changes. There were a number of problems I didn't catch during my testing (more than I would have liked). However, I think I caught the last few with pychecker.
I suppose the next thing to do is to help the fd.org guys set things up so they can manage their module sets from their own CVS tree. That will make it easier to recommend as a build tool.
jhbuild
Checked in a fairly big set of modifications to jhbuild, designed to make it a bit more modular and the code less messy. I had been working on these changes for a while, and had been keeping track of them on the jhbuild-ng branch.
Here are a few of the main changes:
- Code reorganised into a package: The code has been reorganised into a Python package. Unfortunately this means that the old shell script used to start jhbuild won't work. Rerunning "make install" will fix this though. This will make it easier to extend things in the future.
5 November 2003
Mark: the support for building the freedesktop.org X server hasn't been there for a while. It was just added yesterday by Johan Dahlin.
If anyone else is interested in building some of the stuff in freedesktop.org CVS using jhbuild, I wrote some instructions and put them in the wiki.
28 April 2003
Red Hat 9
Installed it on a few boxes, and I like what I see so far. The Bluecurve mouse cursors look really nice. It is also good to see some more of my packages included in the distro (fontilus and pyorbit).
Spam
Some spammer has been sending mail with random @daa.com.au addresses in the From: field. So far, I have received lots of double bounces, a few messages asking if we know about the spam, and many automated responses (some saying the message came from a blocked domain!). The Received headers indicate that the mail comes from somewhere else, so there isn't much I can do. I hate spammers.
5 May 2002
Started another batch of beer yesterday. This time I mixed in a kilogram of honey (replacing some of the sugar), so it will be interesting to see how this turns out. The bubbles coming out of the airlock smell fairly different, so it will hopefully go okay.
Merged some patches from various people into my jhbuild build scripts over the weekend. Thanks to jdahlin, it now has support for getting things from other CVS trees. At the moment, we have rules for thinice2, gstreamer and mrproject using this feature.
Tag: Pyorbit
Linux.conf.au 2004: Scripting with PyORBit
At Linux.conf.au 2004 in Adelaide, I gave a talk about controlling GNOME applications from Python via the accessibility framework.
GUADEC 2003: Libegg and PyORBit
At GUADEC 2003 in Dublin, I gave talks about Libegg and PyORBit.
Tag: Libegg
GUADEC 2003: Libegg and PyORBit
At GUADEC 2003 in Dublin, I gave talks about Libegg and PyORBit.
Linux.conf.au 2003: EggMenu
I gave a talk at Linux.conf.au 2003 about the experimental “EggMenu” framework I had been working on. This code was eventually merged into GTK 2.4 as GtkUIManager.
Tag: Dia
GUADEC 2000: Dia and PyGTK
At GUADEC 2000 in Paris, I gave talks about the Dia diagram editor, and my Python bindings for GTK and GNOME.