By Alan Woodward

Recently an old colleague, Dr Andrew Rogoyski, came to lecture to our MSc students on how government deals with cyber security. Dr Rogoyski has studied the interactions between government and industry and his talk led to a key question for which there was a surprising range of views. The question? When and how should government get involved in cyber security?

The UK has the most Internet-centric economy in the G20 group of industrialised nations, according to research by the Boston Consulting Group released in March 2012, which estimates that the UK’s internet economy was worth £121bn in 2010, more than £2,000 per person. Couple this with the knowledge that approximately 20 threats per second are discovered on the Internet, and it’s not surprising that the UK government lists cyber security as a “Tier 1 Threat”, alongside terrorism. However, recognising the threat is slightly different from actually doing something about it.

Governments now recognise that there is a strong economic advantage in having a secure digital infrastructure. To attract businesses to your economy, you increasingly need to demonstrate that your country is a safe place to conduct Internet-based business. Booz Allen measures this aspect of national economies with its Cyber Hub Index.

Interestingly, the UK and the US are seen as the safest places for Internet-based business. This has resulted in several large corporations quietly reversing the recent trend of relocating business to the developing world to reduce costs. Ensuring security has become as important a business driver for governments as cost, if not more so. When a country loses its AAA credit rating from a ratings agency, it makes headlines. I predict it will not be long before similar importance is attached to measures such as the Booz Allen Cyber Hub Index.

But in order to ensure a safe environment, where does government responsibility end and business responsibility begin? In November 2011, the UK government hosted the first inter-governmental conference on the cyber threat, at which it issued a revised cyber-security strategy. As well as discussing the usual topics of the threat from cybercrime, espionage and warfare, the conference saw the debate begin at governmental level as to where responsibility lies for protecting key assets on the Internet. When the national interest is threatened, responsibility for protection lies primarily with the state, but many governments are powerless in the face of the cyber threat, for a variety of reasons.


A significant difficulty in protecting critical national assets is that the Internet is primarily run by private companies or non-governmental organisations. That’s true even in the case of critical national infrastructure such as utilities, which are vulnerable to attack via the Internet. Most of the infrastructure and services that underpin national digital infrastructures are run by private companies such as HP, Fujitsu, IBM, Verizon, BT and others. Even the key technologies that sit on top of the infrastructure are developed by private companies ranging from Google, to Microsoft, to Apple, plus a raft of much smaller start-ups, some of which you will never have heard of. The level of investment made by these companies dwarfs that made by governments.

For example, the UK’s National Cyber Security Programme is making available a total of GBP650 million (USD1.01 billion) over four years. This money is intended to fund a programme in which government works with businesses, as well as protecting governmental assets. But that sum looks small when you consider, for example, that the cyber security company Symantec spent USD862 million in 2011 alone on research and development. Similarly, Microsoft spent USD8.7 billion in 2010 and Google USD3.7 billion. This disparity in spending, together with the fact that governments are used to procuring systems over many years rather than at the speed at which Internet technologies change, means that governments find it very difficult to engage with private businesses.

So what have governments done in response to this situation? Well, they have acted in remarkably different ways.

For example, you might imagine the all-out attack on Estonia in 2007 would have led to an aggressive response. Instead it led to the formation of the Co-operative Cyber Defence Centre of Excellence (CCD COE). The purpose of CCD COE is to understand the cyber threat as it develops and thereby to prevent attacks. This is an approach which has received the full backing of NATO. Meanwhile, the EU has created the European Network and Information Security Agency (ENISA) to act as a hub for the exchange of information, best practices and knowledge in the field of information security.

Other governments have adopted a more militaristic approach. In May 2010, the United States Cyber Command, part of the US Strategic Command, became operational. Cyber Command is not just there for the operations and defence of specified Department of Defense information networks but also to carry out “full spectrum military cyberspace operations”. Similarly, Israeli Prime Minister Binyamin Netanyahu announced in May 2011 that the country would set up a cyber-defence task force to defend Israel’s vital infrastructure from cyber-attacks.

Regardless of the style of approach, one common theme has emerged: the key to effective defence against the rapidly evolving threat is shared intelligence. The studies conducted by Dr Rogoyski showed that what business wants most from government is information sharing and awareness raising. And intelligence is one thing that governments do have.

They are now looking for ways of sharing sensitive information, which they might otherwise be unwilling to share because it might reveal its source, with those who are directly affected by it. In the US in 2011, the Department of Defense launched a new pilot programme, the Defense Industrial Base Cyber Pilot, in which it shares classified threat intelligence with around 20 defence contractors or their commercial internet service providers. Although the initial scope of the pilot was to help protect government networks, it doesn’t take a great leap of imagination to see how this could become a two-way process, especially in areas such as power, transportation and energy. The success of the scheme resulted in it being extended in September 2011 to include more private organisations. It has, however, highlighted in the public consciousness that the military are involved in protecting the Internet, and the debate continues as to whether it should be the Department of Homeland Security or the DoD that has such a responsibility. Either way, the positive aspect is that it is happening.


In the UK, the private sector is not necessarily waiting for government direction. For example, a financial services virtual task force has been formed by several large banks. This task force co-operates with the Metropolitan Police and exchanges information on threats and attacks as rapidly as possible. This has proved to be a very effective approach and has led to a number of successful prosecutions. Another information exchange is being set up by Intellect and ADS, UK hi-tech trade associations.

The emergence of the infamous Stuxnet virus has highlighted how vulnerable critical national infrastructure is, and this has given a jolt to all those thinking about Internet security from a governmental perspective. Even if it were just a commercial issue, cyber security (and certainly the perception of it) can dramatically affect a nation’s fortune in the modern world. The fact that someone can potentially turn off the water and lights, and stop the trains, makes people think quite differently about what a “stable” country is, and will certainly influence anyone trying to decide whether to base their business in a country.

However, it is clear that unlike many historical threats to national wellbeing, this threat can only be checked by the closest collaboration possible between government and business. Business must be focussed on ensuring that this happens, and government must be more willing to share what it knows than it has been previously.

With news only this week that the Duqu virus (evil son of Stuxnet) has been found in the wild in a new variant, we can see that the threats are becoming more advanced, more persistent and, perhaps most worrying of all, more targeted. Governments and business have a relatively small window of time in which to put in place the mechanisms needed to share information such that it can be acted upon quickly enough to prevent damage. Countries that fail to do this will rapidly realise that whilst in the past people “voted with their feet”, these days people “vote with their mouse”, and it takes far less time to lose trust in the Internet age than it ever did before.

Posted in general

Every year in the month of March, the Computing Department puts together a PhD Conference at which the work of its PhD students is celebrated through presentations and posters. The event acts as a training ground where the Department’s Postgraduate Research Students (PGRs) can test drive presenting their contributions to computer science, giving participating students a feel for external conferences. This year’s event, the 9th Conference, was full of outstanding moments, the most prominent being the overwhelming support and attendance by Computing Department staff and PGRs: a fact that was noted and appreciated by the Vice Chancellor, Professor Sir Christopher Snowden, who gave the opening address. His address was followed by an amazing motivational speech by Dr Alastair MacWilson, Global Managing Director of Accenture Technology Consulting, who emphasized the importance of seizing every opportunity available and encouraged all attendees to be more than the sum of their skill sets: to be flexible, responsible and trustworthy, and always willing to take up opportunities in pursuit of their dreams.

9th Annual Computing Department PhD Conference, University of Surrey

A second motivational speech was given by Professor Dave Robertson, Head of the School of Informatics at the University of Edinburgh, who enthused the crowd with a highly captivating overview of current research trends in computer science and concluded his talk by encouraging our research community not to shy away from the option of being self-employed, as a vehicle for trailblazing new trends and schools of thought in computing. This philosophy seemed to complement Professor Chris France’s foreword to the Conference’s programme.

The 9th Annual Computing Department PhD Conference culminated in the awarding of prizes; the categories and winners are listed below.

Best Paper

Mr Panagiotis Ioannou, for his paper ‘Effect of Spiking Network Parameters on Polychronization’. He received an Amazon gift voucher for £60, sponsored by BCS and awarded by Dr Roger Peel.

Best Paper Presentation (1)

Mr Wissam Albukhanajer, for his presentation of the paper ‘Image Identification Using Evolutionary Trace Transform for Copyright Protection’. He received an Amazon gift voucher for £40, sponsored by the Computing Department.

Best Paper Presentation (2)

Miss Kendi Muchungi, for her presentation of the paper ‘Computation Simulation of Light Adaptation Incorporating Rod-Cone Coupling’. She received a Kindle, provided by IBM UK’s Mr Steve Legg.

Best Paper Review

Mr Matthew Karlsen, who received a £20 Amazon gift voucher sponsored by the Computing Department.

Best Poster

Mrs Areej Alfraih, for her poster entitled ‘Chromatic Aberration Estimation for Image Splicing Detection’.  She received an Amazon gift voucher for £40, sponsored by BCS and awarded by Dr Roger Peel.

Best Research Potential

Mr Brian Gardner, for his poster entitled ‘Neurocomputational Model of Foraging Behaviour based on Reinforcement Learning’.  He received an Amazon gift voucher for £20, sponsored by the Computing Department.

As is the case with any event, its realisation is only as good as its facilitation, and for this event a debt of gratitude is owed to Mr Nick Ryman-Tubb, who ensured proceedings ran smoothly and on time. The event was a resounding success, not least because of the overwhelming show of support from both industry and academia.

Sponsors: Intellas UK, BCS, IBM, Detica, Memset, Thoughtified

Organising Committee: Dr Lilian Tang, Mrs Maggie Burton, Miss Anna Vartapetiance (PhD Rep), Mr Kostas Eftaxias (PhD Rep), Miss Tameera Rahman (PhD Rep), Mr Aasis Vinayak (PhD Rep), Miss Kendi Muchungi (PhD Rep), Mr Christopher Smith, Mr Spencer Thomas

Academic Reviewers: Dr Matthew Casey, Dr Andre Gruning, Prof Yaochu Jin, Dr Shujun Li, Dr Mark Manulis, Dr Sotiris Moschoyannis, Dr Lilian Tang, Dr Helen Treharne (all University of Surrey)

Judges: Prof Steve Schneider (University of Surrey), Prof Dave Robertson (University of Edinburgh), Mr Steve Legg (IBM UK), Dr John Baxter (University of Surrey), Dr Dawn Duke (University of Surrey)

Photographer/Videographer: Mr Ghulam Qadir

Attendance and Encouragement: Prof Sir Christopher Snowden (Vice Chancellor, University of Surrey), Prof Chris France (Associate Dean of Postgraduate Research Students, Faculty of Engineering and Physical Sciences), Prof Jonathan Seville (Dean, Faculty of Engineering and Physical Sciences), Computing Department Staff and PGRs

Posted in general

by Pablo Gonzalez Alonso
BSc Computer Science 2010

After graduating from Surrey, I had the chance to start my career as a mobile developer (or mobile monkey, as I like to call it). More precisely, I started building Android and iOS applications. It has given me a deeper view of computing on the go.

Since the beginnings of computing, as in other engineering fields, we have excelled at producing smaller, more powerful and more efficient products. Whenever we think we are at the summit, we find that there is still a long and exciting way to go up. This shrinking of hardware has meant that it is now possible to carry the power of what used to be a mainframe, occupying entire rooms, inside our pockets or backpacks: something my grandfather thinks is pure science fiction, even though he owns one.

There is no doubt that technology has found a great place in our lives. It’s changing the way we live and interact with each other. Recently, Stephen Hawking said: “The Human Species Has Entered a New Stage of Evolution”. He talked about how, at this point in time, information is created and transferred by humans at very high rates. About 50,000 books are published in English every year. Much of this information may not be useful at all. However, as Hawking notes, this process mimics the way information has always been transferred through natural selection by means of DNA: both useful and not so useful data are created, and what is not needed disappears.

The jump of computers, and the Internet, into our pockets means that we are continually producing and consuming information, whether “tweeting” about how nice the tomato sauce is at your favourite Italian restaurant or reporting on natural disasters. It also increases people’s creativity and allows ideas (whether useful or not) to be promoted, shared and kept alive.

I truly agree with Hawking’s statement and believe we are very lucky to live in the time we do. I sometimes have a thought that makes me rejoice: I look at all the things we are capable of engineering today, travel back 100 years in my head, and realize that no one at that time could even speculate about what the future would bring. Coming back to the present, I realize that what is coming in the future is going to be beyond incredible. Often, I try to get my mind to imagine what future technology will be like.

In general, it can be said that what is coming is going to be incredible: it will change our lives to an even greater extent, and mobile technology will have a leading part in this process.

Posted in general

Reblogged from Prof Alan Woodward

As someone who is an unusual mixture of physicist, engineer, statistician and computer scientist, I have long known the value of being able to visualise your data. As computing power and data storage capacities have increased, there has been a tendency to suffer from data overload. Consequently, being able to dynamically manipulate large data sets and use them to create visual representations can lead to insights that would simply not result from poring over the raw data.

Florence Nightingale (yes that Florence Nightingale) was one of the first to use graphical representations to demonstrate publicly the poor conditions being suffered in the Crimea by British soldiers. And, we’ve all seen bar charts, spider diagrams and so on. But such simple tools have long since ceased to enable us to visualise the volumes and types of data that modern science needs to analyse. Enter the Allosphere.

The Allosphere was created back in 2008. However, increasing experience of how to use it, and advances in the supercomputers that do the hard work, have meant that the Allosphere is now enabling analysis of physical phenomena that is truly remarkable, and rather beautiful to watch.


So, what is the Allosphere? The most obvious feature is the huge sphere within which images can be projected. Not surprisingly it can be in 3D, but most importantly you can immerse yourself within your data, your equations or the images you have taken.

It looks like something out of a science fiction movie, and can accommodate upwards of 30 researchers who can stand together, deep within representations of their data, manipulate it using wireless joysticks, and together consider what the data is telling them:

Of course, none of this would be possible without the computing power that lies, unseen, in its air-conditioned hall. The processing power that has been assembled is really impressive. More impressive still is the way in which it has been combined to produce a “supercomputer”. The key is the algorithms and the software that implements them, without which the supercomputer would be a very expensive heating system. Those at the Allosphere have been developing some, frankly, inspired pieces of software. And they don’t keep it all to themselves: they regularly contribute to Open Source projects, which I would encourage you to go and visit. These include:

Gamma – generic synthesis library
Cosm – extensions to Max/MSP/Jitter for building immersive environments
LuaAV – extension to Lua for tight coupling of computation and display of data and sound
CSL – the Create Signal Library for sound generation
Device Server – for linking remote devices like wiimotes, joysticks and a lot more
Stereo – for rendering stereo imagery
GLV – a GUI-based toolset for developing interfaces to real-time systems

So, what does all of that add up to? Well, it has now reached the point where you can walk through the nano-scale world and view data representing the multimodal quantum mechanics at work.

I strongly encourage anyone to listen to Professor JoAnn Kuchera-Morin (Director of the Allosphere) in the TED talk she gave two years ago. I, for one, hope she does another very soon.

Posted in general

by Professor Alan Woodward

Computer hackers have disrupted the water supply in an area of the US in the latest cyber attack on infrastructure services.

Whilst nations have been concentrating on protecting obvious cyber security targets, such as financial institutions, it will be too late if concerted international action to protect our infrastructure is left until the lights start going out and the water no longer comes out of the tap.

Iran and Norway have also recently come under cyber attack. Hackers are becoming more interested in the critical infrastructure of nations around the world.

Whether the motive for these attacks is cybercrime, cyber warfare or activism is almost irrelevant: what they highlight is that the vast majority of the world’s critical national infrastructure is vulnerable.

Posted in general

The RuleML Initiative brings together delegates from Academia and Industry who have a shared interest in Web rules. This is a wide-ranging initiative, and is a natural forum for the Digital Ecosystems Group’s work on business modelling for the Web using the OMG’s Semantics of Business Vocabulary and Business Rules (SBVR). Last week, Alexandros Marinos travelled to the 2011 RuleML symposium to present his latest work with Prof. Paul Krause and Pagan Gazzard.

This work generated a high level of excitement, with Alexandros’ presentation and associated demonstration of a syntax-directed editor for SBVR winning the RuleML Challenge; the second successive year in which we have done this. It was also featured in the closing talk as one of the seven highlights of the Symposium. The award is voted on by the audience, and although the field was stronger this year, the response from the audience was better too. This fully featured editor solves a usability issue that has been hampering the uptake of SBVR. But Alexandros stole the show with an “and one last thing” moment reminiscent of the late Steve Jobs – as he was closing his demonstration, he said, “Oh, and you can use this right now, within your browser”. This was immediately followed by a clatter of keystrokes from the audience as they all logged into the website and started building example models in SBVR. Although a simple thing to say, being fully web-based and requiring no installation was technically one of the toughest challenges, and its success is a strong reflection of Pagan’s programming skills.

Key researchers from IBM and Stanford were keen to find out more about the tool, and there were also approaches from Red Hat and Vulcan Inc to explore the possibility of integrating our tool with their work. Talking to Benjamin Grosof from Vulcan was particularly interesting to us, as Vulcan’s SILK is a meeting point for logic programming tools, one of which is Cyc, the latest manifestation of Doug Lenat’s big vision of capturing Large Knowledge in a way that facilitates mechanical inference. Working with SILK means our editor could become an interface for Cyc. Exciting times indeed!

Posted in general

By Paul Krause

Along with many, my first “real” programming started with reading “K & R”, the book on the C programming language that Dennis Ritchie co-authored with Brian Kernighan. This “quirky but successful” language was Ritchie’s invention and is the foundation upon which most currently used programming languages, C++, Java and, more recently, C#, Python and Ruby, have been built. In the early days of C (“K & R” was published in 1978 and remains in print), it was closely linked with the Unix operating system that Ritchie co-authored with Ken Thompson. Ritchie’s background in theoretical computer science meant that Unix had a strong theoretical foundation, leading to it becoming the foundation of choice for Steve Jobs’ revolutionary OS X and Linus Torvalds’ Linux. Thus, although his is not a household name, Ritchie’s work is foundational to the world of e-commerce and social computing that we are in today. Dennis Ritchie died at his home in New Jersey on 8th October following a battle with prostate cancer and heart disease. Throughout his life he retained unchanging values of modesty, friendship and collegiality. His was truly a great mind.

Posted in general

By Shujun Li

Last week I attended this year’s International Conference on Image Processing (ICIP 2011) in Brussels, Belgium. ICIP is not a computing conference but an EE (electronic engineering) one. Since many image processing problems can be solved effectively by computing methods, however, ICIP is also attended by many computer scientists and mathematicians. As one of the flagship conferences of the IEEE Signal Processing Society, it is usually attended by more than one thousand people from different fields. This year more than 800 papers were accepted.

My main goal was of course to present the paper we had accepted there, titled “Recovering Missing Coefficients in DCT-Transformed Images.” Our paper was allocated to an afternoon poster session and was well attended. Two of the co-authors (the first two, myself and a German colleague from my former institute, the University of Konstanz) presented the poster. While officially we only needed to present for 1.5 hours (the first half of the session), we ended up standing there for more than three hours because there were always interested people around. As its name suggests, our work is a general framework for recovering missing coefficients from DCT-transformed images, and its application to JPEG images and MPEG videos is straightforward. The basic idea is to model the recovery problem as a linear program and then solve it in polynomial time. The recovery results are surprisingly good even when a lot of DCT coefficients are missing. For instance, if the 15 most significant DCT coefficients are missing from each 8×8 block, some images can still be recovered with an acceptable quality, allowing people to see all the semantic contents of the image (see images below). If you want to know more, you can take a look at our poster, which is available online. Our work has the potential to find applications in several sub-fields of multimedia coding, including image compression, multimedia security and forensics. We are currently investigating these possibilities.
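The paper’s exact formulation isn’t reproduced here, but the core idea, treating the unknown coefficients as variables of a linear program and minimising a smoothness objective over the reconstructed signal, can be sketched on a 1D toy signal. Everything below (the total-variation objective, the helper name, the toy step signal) is an illustrative assumption of mine, not the authors’ actual model:

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.optimize import linprog

def recover_missing_dct(coeffs, missing, n):
    """Recover DCT coefficients at `missing` indices by choosing values that
    minimise the total variation of the reconstructed signal, as an LP."""
    known = coeffs.copy()
    known[missing] = 0.0
    b = idct(known, norm='ortho')            # signal from known coefficients only
    # Columns of A: signal contribution of a unit value at each missing coefficient
    A = np.zeros((n, len(missing)))
    for j, k in enumerate(missing):
        e = np.zeros(n)
        e[k] = 1.0
        A[:, j] = idct(e, norm='ortho')
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n finite-difference operator
    DA, Db = D @ A, D @ b
    m_, t_ = len(missing), n - 1
    # Variables: [m (missing coeffs), t (slacks linearising |D x|)]
    c_obj = np.concatenate([np.zeros(m_), np.ones(t_)])
    # t >= +(DA m + Db)  and  t >= -(DA m + Db)
    A_ub = np.block([[DA, -np.eye(t_)], [-DA, -np.eye(t_)]])
    b_ub = np.concatenate([-Db, Db])
    bounds = [(None, None)] * m_ + [(0, None)] * t_
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
    recovered = coeffs.copy()
    recovered[missing] = res.x[:m_]
    return recovered, res

# Toy demo: a step signal with its 8 lowest-frequency coefficients removed
n = 32
x = np.concatenate([np.zeros(16), np.ones(16)])
c = dct(x, norm='ortho')
missing = np.arange(8)
recovered, res = recover_missing_dct(c, missing, n)
x_rec = idct(recovered, norm='ortho')
```

The absolute values in the total-variation objective are linearised with the usual pair of inequality constraints per slack variable, which is what keeps the whole problem a plain LP solvable in polynomial time.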

For such a big conference, with around 1000 presentations and multiple parallel sessions, you always have trouble deciding which sessions to attend. I spent most of my time at poster sessions and a few oral sessions on multimedia security and forensics. A particularly interesting session I attended was the “Best Student Paper Award Session”, where eight finalists presented in front of the audience and the award committee. One of the papers I was interested in is about a technique countering JPEG anti-forensics. Three things need a little explanation. First, JPEG forensics is about detecting that a given image was JPEG compressed at some point in the past. This can be done by simply looking at the histogram of the DCT coefficients, which has a lot of gaps between peaks, reflecting the quantization step applied in the JPEG encoder. Second, JPEG anti-forensics manipulates a JPEG-compressed image in such a way that the footprint of JPEG compression is removed and the simple forensic tool fails. One simple approach is to add a noise-like signal so that the DCT coefficients of the manipulated JPEG image look like those of an uncompressed image. And last, the new technique proposed in the paper detects JPEG compression even when anti-forensic manipulation is present: it tries to estimate the quantization factor of the original JPEG image by checking re-compressed versions of the image at different quality factors. This paper ended up winning one of the three best student paper awards.
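The histogram-gap idea can be illustrated with a deliberately crude sketch (my own toy detector, not any of the tools discussed at the session): coefficients quantised with step q pile up exactly on multiples of q, so the excess histogram mass on that grid, compared with the roughly 1/q a smooth histogram would place there, betrays prior compression. The thresholds and function names are illustrative assumptions:

```python
import numpy as np

def estimate_quantization_step(coeffs, max_q=16):
    """Score each candidate step q by how much more of the coefficient
    mass sits on multiples of q than a smooth histogram would put there."""
    coeffs = np.round(coeffs).astype(int)
    best_q, best_score = 1, 0.0
    for q in range(2, max_q + 1):
        on_grid = np.mean(coeffs % q == 0)
        score = on_grid - 1.0 / q   # excess mass over the ~1/q baseline
        if score > best_score:
            best_q, best_score = q, score
    return best_q, best_score

def looks_quantized(coeffs, threshold=0.3):
    """Crude forensic check: True if some step q leaves a strong grid footprint."""
    q, score = estimate_quantization_step(coeffs)
    return score > threshold

rng = np.random.default_rng(42)
raw = rng.laplace(0.0, 10.0, 5000)       # AC DCT coefficients are roughly Laplacian
quantized = 5.0 * np.round(raw / 5.0)    # what an encoder with step 5 would keep
```

A real detector works per DCT frequency band and handles the noise-dithered anti-forensic case; this sketch only shows why the naive version is so easy to fool by adding noise that fills in the gaps.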

It is interesting to see that we now have both forensics and anti-forensics. Most forensics research does not have anti-forensics in mind, but I do think that forensic tools should consider anti-forensics from the very beginning because, according to Kerckhoffs’ principle (or Shannon’s maxim if you like), manipulators of multimedia data should be assumed to have full knowledge of the forensic tools that will run on the manipulated media. Ideally, a forensic tool should try to handle all known anti-forensic algorithms. Of course, such fully functional forensics is a very challenging task, so we should expect to continue seeing the cat-and-mouse game between forensics and anti-forensics over the next few years.

Posted in general

Kendi Muchungi, PhD student
Department of Computing

19th August 2011 saw the first day of the three-day Green Man festival, held in the Brecon Beacons in South Wales. In the vein of most festivals, there was an array of spectacular musical performers, the likes of Tim Minchin and Laura Marling among many others.

At the heart of the festival, in Einstein’s Garden, was a group of Surrey PhD Psychology and Computing research students: Kendi Muchungi (Computing), Christopher Hope and Andrew Pringle (Psychology), headed by Dr. Matthew Casey from the Department of Computing. These research students were dressed in Victorian getup on a circus-looking stage set up in late 19th-century style, and performed perception illusions in a stall named “Cirque de Perception”.

The idea behind “Cirque de Perception” was to introduce festival goers to some of the science behind unusual perception illusions in a fun and simple way so as to capture their imagination and interest. “Cirque de Perception” had four main acts:

The Rubber Hand Illusion: this illusion required a willing participant, whose left hand was obscured from their view behind a screen; a rubber hand, positioned in such a way as to mimic their actual left hand; and finally, the synchronous stroking of the same finger on the rubber hand and the obscured hand over a period of time. Because we derive a sense of self from multiple senses, vision, touch and proprioception, manipulating these senses may cause some restructuring in our brain (neuroplasticity) that results in a temporary transfer of our sense of self to the visible rubber hand.

This act actually stole the show in Einstein’s Garden and we had willing punters every single day of the festival. The climax of the show always came when the person stroking the hands would suddenly hit the rubber hand. We had most participants scream or yelp as it took them a few moments to realize that the rubber hand was not their own.

The Stroop Effect: we had four boards with colour names painted in colours that were different to their name. This effect simply showed the interference that takes place in our brains when we are required to shout out the colour of the ink, rather than read the word. This shows that our training to read is so pervasive, that we read automatically even when we don’t need to.

The McGurk Effect: for this performance we had two of the PhD research students participate, with one hidden from the sight of the audience and the other in full view. The student in full view would mouth a phoneme, say ‘ga’, while the one out of sight would voice another, say ‘ba’, and we would then have the audience split up into groups depending on what they thought they heard. Most people in the audience tended to hear a phoneme that was neither voiced nor mouthed – ‘da’ – hence the McGurk Effect.

Ventriloquism: for this act, the ability of the ventriloquist Kieran Powell was tested with three Dodo puppets. Kieran would remain on stage while we moved the Dodos around, ‘singing’ Old McDonald Had a Farm, at different locations. This performance was successful because even though the audience knew that Kieran was the ventriloquist, they always seemed to shift their attention to the Dodo, showing that we modify our perception of sound location depending upon what we see.

All in all, “Cirque de Perception” was one of the major attractions at the festival, even though the only music to be heard at this stall was Old McDonald.  It even made it into the New Scientist Blog.

Einstein’s Garden is managed and curated by Ellen Dowell, and sponsored by the Wellcome Trust as part of the Science at Play project.

Posted in general, PhD- a day in the life

Just back from a workshop on Requirements Engineering for Electronic Voting Systems, which took place in Trento earlier this week. I was presenting a paper about our findings from some focus groups we ran on our prototype secure voting system ‘Pret a Voter’. In a nutshell: people were fine using the system to vote, but were less keen on doing security checks. We’re currently revisiting the best way of doing those checks.

The workshop covered a broad spectrum of topics. One talk described a formal logic for specifying pure forms of voting requirements. There was a panel discussion on verifiability. The keynote speaker talked about experiences and issues in international election observation. There was a clever ballot-stuffing attack on a postal voting system. One talk was about a system in Germany that has to manage an election with over 500 candidates, in which voters cast 71 votes each! They are finding that this is becoming too hard to do on paper (the paper ballot form is about 1m wide) and are looking for technology to help them out. It’s not as easy as it looks. Another talk was about a ‘traffic lights protocol’ to tell users when they have filled out their ballot form correctly, with red, orange and green for the various stages they could be at.
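The traffic-lights idea is essentially a small validity-state function over the ballot. A minimal sketch, under assumptions of my own (71 votes total as in the German talk, and a hypothetical cap of 3 votes per candidate; the names `Light` and `ballot_status` are mine, not the talk’s), might look like this:

```python
from enum import Enum

class Light(Enum):
    RED = "invalid"        # a rule is broken: the ballot would be rejected
    ORANGE = "incomplete"  # nothing broken yet, but votes remain unused
    GREEN = "complete"     # every vote used and no rule broken

def ballot_status(votes, total_allowed=71, per_candidate_max=3):
    """Toy traffic-light check for a cumulative-voting ballot.
    `votes` maps candidate -> number of votes given to that candidate."""
    if any(v < 0 or v > per_candidate_max for v in votes.values()):
        return Light.RED
    used = sum(votes.values())
    if used > total_allowed:
        return Light.RED
    return Light.GREEN if used == total_allowed else Light.ORANGE
```

The point of the three-state design is that the light can be updated live as the voter fills in the form, so mistakes surface immediately rather than after a metre-wide paper ballot has been completed.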

It’s always stimulating to hear interesting talks, and to have some intensive discussions with other researchers in the field. What I also got out of the workshop was a better appreciation of the relationship between the theory and the practical issues. I’ve gained more insight into what’s needed on a practical level to make our system suitable to run an election, and this will feed back into our research.

Posted in general