Voting and ranking exercises

Submitted by Sukaina Bharwani | published 25th Mar 2011 | last updated 30th Mar 2011

Many formal metrics (e.g., multi-criteria assessment) lead to choices that depend on rather small differences in criteria and aggregate scores. In contrast, stakeholders may favour one or more options for many reasons. Deliberative techniques of collective choice, ranking options and voting on preferred options may be desirable, either in lieu of or as a complement to more formal screening methods.

One example is an iterative ranking technique:

(1) Each person ranks the options from 1 (most preferred) to n (least preferred), where n is the number of options

(2) Options that are not ranked highly are dropped. For instance, the number of 'votes' for rank 1 or 2 can be summed across all voters, and options with few 1s or 2s are dropped.

(3) The rankings are re-calibrated, keeping each voter's order of preference but closing the gaps left by the dropped options. So if voter A had ranked option X as a 1 and X was dropped, then voter A's option that was previously ranked 2 becomes 1, and so forth.

(4) Steps 2 and 3 are repeated until only the option(s) with the strongest support remain.
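The steps above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the vote data, the drop rule (fewer than `threshold` top-two votes), and all function names are assumptions made for the example.

```python
def count_top_votes(rankings, top=2):
    """Step 2 tally: how many voters placed each option at rank `top` or better."""
    counts = {opt: 0 for opt in next(iter(rankings.values()))}
    for ranks in rankings.values():
        for opt, r in ranks.items():
            if r <= top:
                counts[opt] += 1
    return counts

def recalibrate(ranks, dropped):
    """Step 3: keep a voter's preference order, closing gaps left by
    dropped options (a rank-2 option becomes rank 1, and so on)."""
    kept = sorted((r, opt) for opt, r in ranks.items() if opt not in dropped)
    return {opt: i + 1 for i, (_, opt) in enumerate(kept)}

def iterative_ranking(rankings, threshold=1, top=2):
    """Step 4: repeat the drop/re-rank cycle until every surviving
    option clears the threshold, then return survivors by support."""
    rankings = {voter: dict(r) for voter, r in rankings.items()}
    while True:
        counts = count_top_votes(rankings, top)
        dropped = {opt for opt, c in counts.items() if c < threshold}
        if not dropped or len(dropped) == len(counts):
            return sorted(counts, key=counts.get, reverse=True)
        rankings = {v: recalibrate(r, dropped) for v, r in rankings.items()}

# Three hypothetical voters rank four options (1 = most preferred):
votes = {
    "A": {"w": 1, "x": 2, "y": 3, "z": 4},
    "B": {"w": 2, "x": 1, "y": 4, "z": 3},
    "C": {"w": 1, "x": 3, "y": 2, "z": 4},
}
print(iterative_ranking(votes, threshold=2))  # options y and z are dropped
```

With these votes, options y and z attract too few top-two rankings and are eliminated in the first round, leaving w and x for deeper evaluation.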

This technique might lead to a single 'best' choice, or a smaller number of options that can be evaluated in more depth. The 'voting' ensures that only options that at least some stakeholders value highly are chosen. That is, at least someone is likely to champion the implementation of the option.

There are other such exercises, such as pair-wise voting (used in the Saaty algorithm to define weights in multi-criteria assessment).
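To make the Saaty reference concrete, here is a minimal sketch of deriving criterion weights from a pairwise comparison matrix. The 3x3 matrix is illustrative, and the sketch uses the row geometric-mean approximation rather than the full principal-eigenvector calculation of Saaty's method.

```python
import math

def saaty_weights(matrix):
    """Approximate priority weights from a reciprocal pairwise-comparison
    matrix: geometric mean of each row, normalised to sum to 1."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Entry [i][j] answers "how many times more important is criterion i
# than criterion j?" (values are illustrative).
comparisons = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = saaty_weights(comparisons)
print([round(w, 3) for w in weights])
```

The first criterion, judged 3 and 5 times as important as the others, ends up with roughly two-thirds of the total weight.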

A simple spreadsheet could implement this voting technique. First, though, it would be worth canvassing the range of such approaches.