How do you get sensitive people to discuss sensitive matters in an open and frank manner, in a way that lets you report what was said while respecting the confidence of who said it? The answer is the Chatham House Rule, which allows people to speak as individuals and to express views that may not be those of their organisations, thereby encouraging free discussion. People usually feel more relaxed when they don’t have to worry about their reputation, or the implications of being publicly quoted.
“When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.”
definition of the Chatham House Rule
Project Abacá is supported by the ITaaU, and is investigating the challenges of reviewing digital records for sensitivity, as we described in our previous posts on trust and security and sensitivity judgements. The project recently organised a two-day workshop under the Chatham House Rule, at The National Archives (TNA), to which we invited a wide range of people with an interest in Digital Sensitivity Review. The workshop was attended by participants from several government departments, including the Foreign and Commonwealth Office (FCO), who waived the Rule’s discretion about their organisation’s participation, and from academic institutions in the wider ITaaU Network.
The workshop had three aims:
- To evaluate the outcomes of the feasibility phase of Project Abacá in the operational context of sensitivity review as conducted in a real government department.
- To achieve mutual knowledge exchange on practical sensitivity review issues between TNA, the FCO, the Project Abacá team, and other ITaaU network members.
- To begin to inform policy making on digital sensitivity review in TNA and the FCO during the transition to the 20-year rule, and similarly to begin to inform other departments across government.
The workshop was a great success with very open and frank discussions of the various challenges that surround the sensitivity review of digital records.
So what did we discover?
The workshop was divided into a number of sessions with different areas of focus. On the first morning we focused on bringing all of the attendees up to date with the work of the project, covering many of the issues we have discussed in our previous posts on trust and security and sensitivity judgements.
Later in the first day, all of the workshop attendees had an opportunity to conduct some digital sensitivity reviews for themselves using two different proof-of-concept user interfaces developed by the team. The two interfaces were set up to log the interactions of the attendees in some detail. By logging and timing interactions with the interfaces we significantly enhanced the information we have about the process of creating individual judgements of sensitivity on digital records. The aim was to establish whether reviewers preferred an exploratory style of interaction or were prepared to work in a more regimented, strict priority order. The logs will give us quantitative evidence to augment the qualitative feedback on the sensitivity review process we received at the workshop.
We carefully chose to use our existing Abacá test collection as the target for the reviews. This enabled us to extend the number of records in the collection judged by more than one judge. By close analysis of the judgements and log data we will be able to understand much more fully the issues surrounding inter-judge agreement (or disagreement) – a particularly significant area of research that we believe has never been investigated before in this domain.
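As an aside, one standard way to quantify inter-judge agreement on categorical judgements like these is Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. The sketch below is purely illustrative – the labels and judgements are invented, not drawn from the Abacá collection:

```python
# Illustrative sketch: Cohen's kappa for two reviewers' sensitivity
# judgements. All data below is hypothetical, for demonstration only.
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """Cohen's kappa for two equal-length lists of categorical labels."""
    assert len(judge_a) == len(judge_b) and judge_a
    n = len(judge_a)
    # Observed agreement: fraction of records both judges label the same.
    observed = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # Chance agreement: product of each judge's marginal label frequencies.
    counts_a, counts_b = Counter(judge_a), Counter(judge_b)
    expected = sum(counts_a[l] * counts_b[l] for l in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical reviewers judging six records:
a = ["sensitive", "not", "sensitive", "not", "not", "sensitive"]
b = ["sensitive", "not", "not",       "not", "not", "sensitive"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

A kappa of 1.0 indicates perfect agreement, 0.0 agreement no better than chance; values between give a chance-corrected view of how consistently reviewers judge the same records.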
Still later on the first day, when we asked the attendees to reflect on their experience with the tools, we received excellent feedback in terms of inspiration about possible features of records that might indicate sensitivity and also (just as importantly) features that might definitively flag the absence of sensitivity. Discussions continued late into the evening, long after the formal part of the day ended, as we enjoyed the fine weather and hospitality at a local hostelry on Kew Green!
Day two of the workshop was focused on the process and wider strategic context of digital sensitivity review. We shared views from the different organisations involved in the records process – all the way from the government departments who create the records through to eminent historians who use the records for their scholarly research. All of these perspectives had something to say about the challenge of digital sensitivity review and the transition to the use of digital records in government through the 80s, 90s, and 00s. One thing that emerged loud and clear from this part of the workshop was that any tools we develop for sensitivity review will also need to be matched by tools to assist with the selection and appraisal part of the records transfer process, as well as the ultimate interpretation of digital records when presented from an archive.
A number of the attendees remarked how useful the event had been in extending their understanding of the management of the risks inherent in digital sensitivity review. They also remarked that they would be taking this understanding back to their workplaces to inform their emerging practical approaches to dealing with the collections of digital records that will soon need to be transferred to The National Archives.
Finally, there was unanimity that much deeper research into the problem of digital sensitivity review is needed. There was also a realisation that this will require significant investment of time and money to develop an understanding of the domain and to research the transformative tools that are so urgently needed; the first regular transfers of digital public records begin in only a few years’ time.
Honorary Research Fellow
University of Glasgow – School of Computing Science