By Shujun Li

Last week I attended this year’s International Conference on Image Processing (ICIP 2011) in Brussels, Belgium. ICIP is not a computing conference but an EE (electronic engineering) one. Since many image processing problems can be solved effectively by computing methods, however, ICIP also attracts many computer scientists and mathematicians. As one of the flagship conferences of the IEEE Signal Processing Society, it is usually attended by more than one thousand people from different fields. This year more than 800 papers were accepted.

My main goal was, of course, to present our accepted paper, “Recovering Missing Coefficients in DCT-Transformed Images.” Our paper was allocated to an afternoon poster session and was well attended. Two of the co-authors (the first two, me and a German colleague from my former institute, the University of Konstanz) presented the poster. While officially we only needed to stand by our poster for 1.5 hours (the first half of the session), we ended up staying for more than three hours because there were always interested people around. As its title suggests, our work proposes a general framework for recovering missing coefficients in DCT-transformed images, and its application to JPEG images and MPEG videos is straightforward. The basic idea is to model the recovery problem as a linear program, which can then be solved in polynomial time. The recovery results are surprisingly good even when a lot of DCT coefficients are missing. For instance, if the 15 most significant DCT coefficients are missing from each 8×8 block, some images can still be recovered with an acceptable quality that lets people see all the semantic content of the image (see images below). If you want to know more, you can take a look at our poster, which is available online. Our work has the potential to find applications in several sub-fields of multimedia coding, including image compression, multimedia security and forensics. We are currently investigating these possibilities.
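For readers curious what “recovery as a linear program” looks like in practice, here is a minimal 1D sketch I wrote for this post — my own simplification, not the exact formulation in the paper. It recovers a length-8 signal from a subset of its DCT coefficients by minimising the signal’s total variation, with the known coefficients enforced as equality constraints:

```python
import numpy as np
from scipy.optimize import linprog

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix: row k is the k-th DCT basis vector."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def recover_missing(coeffs, known_mask, n=8):
    """Recover a length-n signal from a subset of its DCT coefficients
    by minimising the total variation sum(|x[i+1] - x[i]|) as an LP."""
    D = dct_matrix(n)
    # Variables: n pixel values x, followed by n-1 slack variables t.
    # Objective: minimise sum of t (an upper bound on the total variation).
    c = np.concatenate([np.zeros(n), np.ones(n - 1)])
    # Inequalities: x[i+1]-x[i] <= t[i] and -(x[i+1]-x[i]) <= t[i].
    Dif = np.zeros((n - 1, n))
    for i in range(n - 1):
        Dif[i, i], Dif[i, i + 1] = -1.0, 1.0
    A_ub = np.block([[Dif, -np.eye(n - 1)], [-Dif, -np.eye(n - 1)]])
    b_ub = np.zeros(2 * (n - 1))
    # Equalities: the known DCT coefficients must be matched exactly.
    A_eq = np.hstack([D[known_mask], np.zeros((known_mask.sum(), n - 1))])
    b_eq = coeffs[known_mask]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n + [(0, None)] * (n - 1))
    return res.x[:n]
```

The choice of total variation as the objective is one natural option for natural images, which tend to be piecewise smooth; the real 2D problem works the same way, just with many more variables and constraints per 8×8 block.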

For such a big conference, with around 1000 presentations and multiple parallel sessions, you always have trouble deciding which sessions to go to. I spent most of my time at poster sessions and a few oral sessions on multimedia security and forensics. A particularly interesting session I attended was the “Best Student Paper Award Session”, where eight finalists presented in front of the audience and the award committee. One of the papers I was interested in was about a technique countering JPEG anti-forensics. Three things need a bit of explanation here. First, JPEG forensics is about detecting whether a given image was JPEG compressed at some point in the past. This can be done by simply looking at the histogram of the DCT coefficients, which shows a lot of gaps between peaks, reflecting the quantization step in the JPEG encoder. Second, JPEG anti-forensics is about manipulating a JPEG-compressed image in such a way that the footprint of JPEG compression is removed and this simple forensic tool fails. One simple approach is to add a noise-like signal so that the DCT coefficients of the manipulated JPEG image look like those of an uncompressed image. And last, the new technique proposed in the paper detects JPEG compression even when such anti-forensic manipulation is present: it tries to estimate the quantization factor of the original JPEG image by re-compressing the image with different quality factors and examining the results. This paper went on to win one of the three best student paper awards.
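To illustrate the histogram-gap idea behind the simple forensic tool, here is a toy sketch of my own (not the method from the awarded paper): quantised coefficients sit on multiples of the quantisation step q, so if almost all non-zero values are divisible by some candidate q, the image has probably been JPEG compressed with that step.

```python
import numpy as np

def estimate_quant_step(dct_coeffs, max_q=16, threshold=0.95):
    """Toy JPEG-compression detector based on gaps in the DCT
    coefficient histogram. Returns the detected quantisation step
    (1 means the coefficients look uncompressed)."""
    c = np.round(dct_coeffs).astype(int)
    c = c[c != 0]  # zeros are multiples of every q, so they carry no evidence
    best_q = 1
    for q in range(2, max_q + 1):
        if np.mean(c % q == 0) >= threshold:
            best_q = q  # keep the largest step that still explains the data
    return best_q
```

On coefficients that really were quantised with step 5 this returns 5, while on coefficients drawn uniformly it returns 1. The anti-forensic trick described above defeats exactly this kind of detector: the added noise fills the gaps, so the divisibility test no longer fires.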

It is interesting to see that we now have both forensics and anti-forensics. Most forensics research does not have anti-forensics in mind, but I do think that forensic tools should consider anti-forensics from the very beginning, because according to Kerckhoffs’ principle (or Shannon’s maxim, if you like) manipulators of multimedia data should be assumed to have full knowledge of the forensic tools that will be run on the manipulated media. Ideally, a forensic tool should try to handle all known anti-forensic algorithms. Of course, such fully-fledged forensics is a very challenging task, so we should expect the cat-and-mouse game between forensics and anti-forensics to continue for the next few years.

Posted in general | 1 Comment

Kendi Muchungi, PhD student
Department of Computing

19th August 2011 saw the first day of the three-day Green Man festival, held in the Brecon Beacons in South Wales. As at most festivals, there was an array of spectacular musical performers, including the likes of Tim Minchin and Laura Marling, among many others.

At the heart of the festival, in Einstein’s Garden, was a group of Surrey PhD Psychology and Computing research students: Kendi Muchungi (Computing) and Christopher Hope and Andrew Pringle (Psychology), headed by Dr. Matthew Casey from the Department of Computing. The students, dressed in Victorian garb on a circus-style stage set up in late 19th-century fashion, performed perception illusions at a stall named “Cirque de Perception”.

The idea behind “Cirque de Perception” was to introduce festival goers to some of the science behind unusual perception illusions in a fun and simple way so as to capture their imagination and interest. “Cirque de Perception” had four main acts:

The Rubber Hand Illusion: this illusion required a willing participant, whose left hand was obscured from their view behind a screen; a rubber hand, positioned so as to mimic their actual left hand; and, finally, the synchronous stroking of the same finger on both the rubber hand and the obscured hand over a period of time. Because we derive our sense of self from multiple senses (vision, touch and proprioception), manipulating these senses can cause some restructuring in the brain (neuroplasticity) that results in a temporary transfer of our sense of self to the visible rubber hand.

This act actually stole the show in Einstein’s Garden, and we had willing punters every single day of the festival. The climax always came when the person stroking the hands would suddenly hit the rubber hand. Most participants screamed or yelped, as it took them a few moments to realize that the rubber hand was not their own.

The Stroop Effect: we had four boards with colour names painted in colours different from the names themselves. This effect shows the interference that takes place in our brains when we are required to shout out the colour of the ink rather than read the word. It demonstrates that our training to read is so pervasive that we read automatically even when we don’t need to.

The McGurk Effect: for this performance two of the PhD research students took part, one hidden from the audience’s sight and the other in full view. The student in full view would mouth a phoneme, say ‘ga’, while the one out of sight voiced another, say ‘ba’; we would then have the audience split up into groups depending on what they thought they heard. Most of the audience tended to hear a phoneme that was neither voiced nor mouthed, ‘da’, hence the McGurk Effect.

Ventriloquism: for this act, the abilities of the ventriloquist Kieran Powell were tested with three Dodo puppets. Kieran would remain on stage while we moved the Dodos around, ‘singing’ Old McDonald Had a Farm, at different locations. This performance was successful because, even though the audience knew that Kieran was the ventriloquist, they always seemed to shift their attention to the Dodos, showing that we modify our perception of sound location depending on what we see.

All in all, “Cirque de Perception” was one of the major attractions at the festival, even though the only music to be heard at this stall was Old McDonald. It even made it into the New Scientist Blog.

Einstein’s Garden is managed and curated by Ellen Dowell, and sponsored by the Wellcome Trust as part of the Science at Play project.

Posted in general, PhD- a day in the life | Leave a comment

Just back from a workshop on Requirements Engineering for Electronic Voting Systems, which took place in Trento earlier this week. I presented a paper about our findings from some focus groups we ran on our prototype secure voting system ‘Prêt à Voter’. In a nutshell: people were fine using the system to vote, but were less keen on performing the security checks. We are currently revisiting the best way of doing those checks.

The workshop covered a broad spectrum of topics. One talk described a formal logic for specifying pure forms of voting requirements. There was a panel discussion on verifiability. The keynote speaker talked about experiences and issues in international election observation. Another talk presented a clever ballot-stuffing attack on a postal voting system. One talk was about a system in Germany that has to manage an election with over 500 candidates, in which voters cast 71 votes each! The organisers are finding that this is becoming too hard to do on paper (the paper ballot form is about 1m wide) and are looking for technology to help them out. It’s not as easy as it looks. Another talk was about a ‘traffic lights protocol’ that tells voters when they have filled out their ballot form correctly, with red, orange and green for the various stages they could be at.

It’s always stimulating to hear interesting talks, and to have some intensive discussions with other researchers in the field. What I also got out of the workshop was a better appreciation of the relationship between the theory and the practical issues. I’ve gained more insight into what’s needed on a practical level to make our system suitable to run an election, and this will feed back into our research.

Posted in general | Leave a comment