Understanding Service personnel satisfaction with their lived experience of the Future Accommodation Model pilot: Technical Report
Published 19 September 2023
Research authors
This research was undertaken by the Ministry of Defence, with support from their advisors Deloitte LLP. The conclusions and findings presented in this report are the sole views of the Ministry of Defence and reflect the information and assumptions gathered during the research in 2021 and are subject to the limitations detailed in the Technical Annex.
This research was completed in January 2022.
List of abbreviations
APF Accommodation Preference Form
CEA Continuity of Education Allowance
DIN Defence Instructions and Notices
DIO Defence Infrastructure Organisation
EDM Expectations Disconfirmation Model
FAD Future Assignment Date
FAM Future Accommodation Model
FHTB Forces Help to Buy
FTRS(FC) Full Time Reserve Service (Full Commitment)
JPA Joint Personnel Administration
JSP Joint Service Publication
LoS Length of Service
LTR Long Term Relationship
LTR(E) Long Term Relationship (Established)
LTR(R) Long Term Relationship (Registered)
MOD Ministry of Defence
MOH Maintain Own Home
MRS Market Research Society
OF Officer
OR Other Rank
PR Preserved Rights
PRS Private Rental Sector
PStat Cat Personnel Status Category
RAF Royal Air Force
RWA Residence at Work Address
SFA Service Family Accommodation
SLA Single Living Accommodation
SP Service Person/Service Personnel
SPR Selected Place of Residence
SSFA Substitute Service Family Accommodation
SSSA Substitute Service Single Accommodation
TP Transitional Protection
UIN Unit Identification Number
HR Unit Human Resources
Definitions and Theoretical Framework
Introduction
This section outlines the definitions used in the research programme objective and sets them in the context of the research. Further detail is provided on the theoretical framework applied in the research, the three relationships used to measure satisfaction, and the limitations surrounding the model.
Definitions
Delivering the overall objective for the research programme required defining the core elements of that objective, i.e.:
- the accommodation options for Service Personnel (SP)
- offering more choice
- increasing satisfaction
- the lived experience
The accommodation options under the FAM pilot policy were designed to offer SP more choice on accommodation, i.e., expanding the accommodation that SP could consider beyond Service provided accommodation. JSP 464 Tri-Service Accommodation Regulations Volume 4: Future Accommodation Model (FAM) Pilot - UK Part 1: Directive notes the following:
- all entitled SP serving at FAM pilot sites will be accommodated with a home under the FAM model
- FAM includes the options of Service Family Accommodation (SFA), Single Living Accommodation (SLA), Private Rental Sector (PRS) or Maintaining own Home (MOH)
- SP already serving at a pilot site at the point of FAM rollout will not be obliged to take up an alternative accommodation offer mid-assignment unless they choose to do so
- should SP already serving at a pilot site, who already rent or own their own home, wish to opt into FAM, they must self-enrol in order to access relevant FAM payments and any associated allowances
- SP already renting a property within 50 miles of their pilot site at the point of rollout can choose to transfer onto FAM and obtain the Rental Payment applicable to their entitlement
- all SP meeting the pilot eligibility criteria are entitled to SLA at their duty station or to the MOH option
- SP meeting the pilot eligibility criteria are able to state a preference for either SFA or PRS via the FAM Accommodation Preference Form (APF)
- further information on submitting an APF can be found via the “Future Accommodation Model: what you need to know” page on GOV.UK
- where possible, accommodation will be allocated to meet the SP preference; however, this will be subject to availability and, therefore, either SFA or PRS may be allocated
Satisfaction is subjective and challenging to measure. When measured through the consumption of goods or services, satisfaction may reflect the extent to which conscious desires or expectations about the quality or value of the consumed product or service are fulfilled (Mathison, 2005).
The approach to measuring satisfaction (and whether FAM was increasing this) was developed through a review of FAM programme documents and discussion with FAM sponsors. This review and discussion determined that overall satisfaction with FAM would be measured by understanding satisfaction across three facets:
- FAM policy options – the policies on which FAM is based
- FAM processes – the administrative delivery and support related to FAM
- FAM accommodation options – the availability of the accommodation options offered through FAM
The consideration of these three facets of FAM contributed to shaping the research programme definition of overall SP satisfaction as ‘satisfaction with the policies, processes and accommodation options provided under the FAM pilot’.
Factors contributing to overall satisfaction with FAM:
- Satisfaction with FAM policy options – FAM policies are transparent and consistent (the MOD has direct and complete influence on policies)
- Satisfaction with FAM processes – FAM processes, which are in the direct and complete control of the MOD, should be consistent for all SP across all sites. However, this may vary due to the human factor, as different teams may execute processes and service provision differently. It may also vary due to cultural differences across the Services
- Accommodation options may vary across sites due to:
  a. legacy SLA and SFA
  b. geographical location
  c. the impact of local/peripheral housing stock for PRS and MOH options, i.e., accessibility, affordability and quality
  d. the impact of local community and social services, i.e., accessibility and quality of schooling, care services and leisure activities
  e. the MOD/FAM Teams have direct influence (through SLA/SFA options) and indirect influence (through policy on PRS and MOH) on this
The definition of the SP lived experience used in the research assumed that the lived experience of SP is a balance between ‘the offer’ and ‘the ask’.
- the ‘offer’ is what is provided to SP by the MOD, such as a competitive salary, opportunities to travel, job security, exciting career opportunities
- the ‘ask’ is what is then required of SP in return, such as having to live differently compared to civilians, sacrificing personal time and the possibility of fighting on the front line
Where the offer outweighs the ask, the SP might experience a positive lived experience and where the ask outweighs the offer, then they might experience a negative lived experience. However, the most important aspect was that the lived experience is different and unique to each SP and their families, and the research aimed to capture this understanding as well. This balance between ‘the offer’ and ‘the ask’ and a positive or negative lived experience is similar to the theoretical framework that was applied in the research design.
Limitations
The following limitations on the definitions were noted:
- the working definition of SP satisfaction with the FAM lived experience was developed top-down by the FAM programme and FAM stakeholders from the Royal Navy, Army and Royal Air Force. This definition may not be aligned with research participants’ own definitions of satisfaction
- the challenge in designing an exact measure of satisfaction, as expressed or conscious desires overlap only partially with true needs, which may be more important and possibly unconscious (and not communicated) (Mathison, 2005)
- measuring satisfaction with the lived experience remained challenging due to the inability to control for the effects of SP experiences beyond FAM and their contribution to views on FAM, i.e., SP could not fully separate their FAM/accommodation experience from their wider experience of Service life
- measuring satisfaction in the context of FAM across SP is based on the assumption that FAM is supplying a fulfilment (a need) for SP and that this fulfilment is consistent across SP (Oliver, 2011)
Theoretical Framework
A theoretical framework was applied and aligned with the definition of the lived experience to explore levels of SP satisfaction. The framework selected was the Expectations Disconfirmation Model (EDM), which aims to measure satisfaction by asking three core questions:
- what are the expectations [EXP] of the product or service?
- what is the performance [PER] of the product or service?
- what is the satisfaction [SAT]?
The EDM measure of satisfaction examines three relationships:
- relationship one: the direct effect of perceptions of the performance of a product or service on satisfaction, i.e., holding positive perceptions of performance may increase satisfaction
- relationship two: the direct effect of expectations on satisfaction, i.e., where expectations are used as a baseline to form a judgment about a product or service, and these expectations may then influence satisfaction independently of performance
- relationship three: disconfirmation, defined as “the result of a comparison between what was expected and what was observed” (Oliver, 2011), and under disconfirmation:
  - high performance is more likely to exceed lower expectations, and this may result in positive disconfirmation and lead to higher satisfaction. Therefore, holding lower expectations may lead to positive disconfirmation and increased satisfaction
  - on the other hand, higher expectations are less likely to be exceeded even if performance is high, resulting in negative disconfirmation. Therefore, higher expectations can lead to negative disconfirmation and less satisfaction (Grimmelikhuijsen and Porumbescu, 2017)
An illustration of the EDM and the relationship between the factors contributing to satisfaction is shown in Table 1; a short worked sketch of this grid follows the table.
Table 1: The Expectancy Disconfirmation Model
Expectations [EXP] | Performance [PER] | Disconfirmation | Satisfaction [SAT] |
---|---|---|---|
High | Excellent | Neutral | No change |
High | Moderate | Neutral/Negative | No change/Low |
High | Poor | Negative | Low |
Medium | Excellent | Neutral/Positive | No change/High |
Medium | Moderate | Neutral | No change |
Medium | Poor | Neutral/Negative | No change/Low |
Low | Excellent | Positive | High |
Low | Moderate | Neutral/Positive | No change/High |
Low | Poor | Neutral | No change |
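To make the relationships above concrete, the Table 1 grid can be read as a simple function of the gap between a performance score and an expectation score. The following Python sketch is illustrative only and is not part of the research analysis; the ordinal values assigned to each level are assumptions made for the example.

```python
# Illustrative sketch of the Table 1 logic (not part of the original analysis).
# The ordinal values assigned to each level are assumptions for this example.

EXPECTATION = {"Low": 1, "Medium": 2, "High": 3}
PERFORMANCE = {"Poor": 1, "Moderate": 2, "Excellent": 3}

# Each cell of Table 1 depends only on the gap (performance minus expectation).
GRID = {
    -2: ("Negative", "Low"),
    -1: ("Neutral/Negative", "No change/Low"),
     0: ("Neutral", "No change"),
     1: ("Neutral/Positive", "No change/High"),
     2: ("Positive", "High"),
}

def edm_outcome(expectation: str, performance: str) -> tuple[str, str]:
    """Return the (disconfirmation, satisfaction) cell of Table 1."""
    gap = PERFORMANCE[performance] - EXPECTATION[expectation]
    return GRID[gap]

print(edm_outcome("Low", "Excellent"))    # ('Positive', 'High')
print(edm_outcome("High", "Moderate"))    # ('Neutral/Negative', 'No change/Low')
```

Reading the grid this way also makes the asymmetry in relationship three visible: the same performance can produce positive or negative disconfirmation depending on the expectation it is compared against.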
Limitations
The following limitations in the theoretical framework were noted:
- expectations of a product or service may be unreasonably high or low and are limited by what the research participant thought was possible and what the research participant was exposed to in terms of stimuli and communication, e.g., negative publicity may create low expectations (Mathison, 2005)
- desires and expectations vary over time more erratically than true needs, making desires and expectations a less reliable indicator of true quality or value, yet expressed desires and expectations are a large part of satisfaction measures (Mathison, 2005)
The research acknowledges that other contributors to overall SP satisfaction related to accommodation may also include:
- experience of the processes associated with FAM
- the physical condition of the accommodation (outside the scope of FAM)
- accommodation related payments and refunds (which were determined outside the FAM programme)
FAM research questions
Table 2 below outlines how the FAM outcomes and policy priorities were linked to the programme research questions, and whether each research question was covered in the research sessions, the large audience conversations, or both.
Table 2: FAM outcomes and policy priorities and the research questions
FAM Outcome / policy priority | Research questions | Large audience conversations | Research sessions |
---|---|---|---|
Overall context | What are the expectations of SP with respect to their accommodation? | Yes | Yes |
SP are satisfied with their overall accommodation allocation experience | To what extent are SP satisfied with the administrative process of being allocated accommodation on the FAM pilot? | Yes | Yes |
Entitlement based on need rather than rank or marital status | How has the FAM pilot needs-based accommodation allowances contributed to SP accommodation choices? | Yes | Yes |
SP are provided with an accommodation subsidy based on their need | What has been the decision-making process for SP when choosing their accommodation on the FAM pilot? | No | Yes |
Rental payment allowance | To what extent has a needs-based accommodation allowance contributed to SP satisfaction with the lived experience? | No | Yes |
SP have more choice in WHERE they live | To what extent has the option to choose where they live contributed to SP satisfaction with the lived experience? | Yes | Yes |
SP have more choice in HOW they live | To what extent has the option to choose how they live contributed to SP satisfaction with the lived experience? | Yes | Yes |
SP have more choice in WITH WHOM they live | To what extent has the option to choose with whom they live contributed to SP satisfaction with the lived experience? | Yes | Yes |
SP are able to have greater stability for themselves and their family | To what extent has the FAM policy enabled SP to provide greater stability for themselves and their family if desired? | No | Yes |
SP are able to have greater stability for themselves and their family | To what extent has the ability to provide greater stability for SP and their family contributed to SP satisfaction with the lived experience? | Yes | Yes |
SP are able to remain mobile | To what extent has the FAM policy enabled SP to remain mobile if desired? | No | Yes |
SP are able to remain mobile | To what extent has the ability to remain mobile contributed to SP satisfaction with the lived experience? | No | Yes |
Broadening entitlement to SP (i.e., Long Term Relationships) | To what extent has the entitlement to accommodation for SP in a Long-Term Relationship contributed to their satisfaction with the lived experience? | Yes | Yes |
Financial support for homeownership (MOH Core Payment) | How is the MOH Core Payment being used by SP? | No | Yes |
Financial support for homeownership (MOH Core Payment) | To what extent has the MOH Core Payment contributed to the SP accommodation choice? | No | Yes |
One-year length of service requirement for eligibility | To what extent has the FAM eligibility requirement for a one-year length of service contributed to SP satisfaction with the lived experience? | Yes | Yes |
Distance of accommodation from workplace | How have the distance from workplace boundaries for each accommodation route contributed to SP accommodation choices? | Yes | Yes |
Distance of accommodation from workplace | To what extent has the distance from workplace boundaries for each accommodation route contributed to SP satisfaction with the lived experience? | Yes | Yes |
Au Pairs | How has the policy option to exclude Au Pairs / Nannies from accommodation allocation calculations contributed to SP satisfaction with the lived experience? | No | Yes |
Single parents with visitation rights | How has the policy option to include eligible children of SP with visitation rights in accommodation allocation calculations contributed to SP satisfaction with the lived experience? | No | Yes |
Transitional Protection | What are the reasons for SP choosing Transitional Protection rather than a FAM accommodation route? | No | Yes |
Cohorts | How has SP satisfaction with the lived experience of the FAM policy varied by rank? | No | Yes |
Data collection
Data collection was primarily qualitative and preferred over quantitative data collection as it:
- allowed for deeper investigation of research participants through qualitative interviewing techniques such as probing and laddering
- delivered a different approach from quantitative survey-based research, which had historically low response and participation rates across the FAM programme
Two user-led qualitative data collection methods were used to deliver the breadth and depth to answer the programme research questions:
- research sessions, which were semi-structured conversations built around research activities that allowed for probing of individual views and experiences
- large audience conversations, which were text-based online focus groups that provided insight into views and experiences in an anonymous setting
Research sessions
The research sessions were conducted remotely (not face-to-face) using Microsoft Teams, and the selection rationale, outputs, benefits, and limitations of the research sessions are summarised in Table 3.
Table 3: Research sessions data collection option
Selection rationale | The data collection team spoke with participants and were able to observe non-verbal communication during video sessions. Provided context and understanding of individual views. Offered the ability to be conducted remotely and digitally. |
Benefits | Participants were comfortable speaking in a small group. The moderator had opportunities to probe, build on earlier responses and navigate the data collection guide to ask appropriate follow-up questions |
Format | Delivered through a semi-structured data collection guide. Featured open-ended and closed questions and two research activities |
Data collection | Microsoft Teams with audio, video (optional), screen sharing, and dial-in options |
Moderation | Involved one participant and a two-person data collection team comprising a moderator / interviewer and a note-taker / observer |
Duration and fieldwork period | 45-60 minute-long conversations, completed between 09:00 and 21:00 on weekdays. Conducted between 14 September 2021 and 22 October 2021 |
Outputs | Claimed behavioural understanding of aspirations and limitations. Transcribed conversation (text), video and screen capture (images) |
Alternative options – considered and discounted | Telephone calls – less interactive, as screen sharing to complete activities was not possible. Paired depth interviews (dyads) – interviews conducted with two participants, often with peers making similar decisions or with similar experiences – may lead to reduced participant openness or increased agreement across participants. |
Achieved sample | A total of 50 research sessions were conducted, 45 with SP and five with the spouses and partners of SP |
Limitations
The following limitations in the use of the research sessions were noted:
- contextual immersion through direct observation of participant accommodation environments was not applied in the research programme, as data collection was conducted through digital channels, with no data collection taking place on FAM pilot sites due to COVID-19 restrictions, security, and logistical concerns. This limitation placed greater dependence on research participant input and historic MOD research in strengthening the contextual analysis
- the data collection focused on expressed views from research participants, and while data collection techniques and tools were applied to draw out unexpressed (unconscious) and expressed (conscious) views and needs, these views and needs may not have been communicated by all participants or collected consistently
- research participants required access to mobile phones/laptops to participate, and this technology requirement resulted in some SP withdrawing from the research due to technical challenges, as well as some SP being excluded from participation
- the initial four-week research fieldwork period was extended to six weeks to deliver the achieved sample size following recruitment challenges
- moderating the research sessions was time and resource intensive, as this required significant mental processing load and execution from the data collection team
- the sessions were technology-dependent, requiring an internet connection and a computer / mobile device
Large audience conversations
The large audience conversations were conducted using a digital platform provided by Remesh, with research participants logging into a session through a specific and unique link where they were able to self-identify and participate. Two large audience sessions were designed and scheduled, one for SP on FAM and the other for SP who were not on FAM.
The selection rationale, outputs, benefits, and limitations of the large audience conversations are summarised in Table 4.
Table 4: Large audience design option information
Selection rationale | Text-based conversations with a live audience of participants. Offered the ability to be conducted remotely and digitally |
Benefits | Anonymous participation. More observers attended, up to six from the data collection teams. No delay between research fieldwork and data collection – data was immediately captured on the data collection platform and available for initial analysis. Participants responded to a question simultaneously but did not see other participants’ responses until all had finished typing; therefore, participants were less likely to be influenced by other participants’ responses. |
Format | Delivered through a semi-structured data collection guide. Both open-ended and closed questions were asked. |
Data collection | The Remesh platform, an online interactive data collection platform |
Moderation | Involved multiple participants, a moderating team of two researchers, and four additional observers |
Duration | Were 60 minutes long, completed between 14:00 and 15:00 |
Fieldwork | Conducted on 4 October 2021 and 5 October 2021 |
Outputs | Claimed attitudinal and behavioural understanding of aspirations and limitations and typed in text responses (text) |
Alternative options – considered and discounted | Surveys – excluded due to historically low response rates. In-person focus groups – difficult to deliver any face-to-face research under COVID-19 guidelines. Bulletin boards – lengthy fieldwork period required, plus moderation and incentivisation |
Achieved Sample | Two sessions were conducted with a total of 19 participants |
Limitations
The following limitations in the use of the large audience conversations were noted:
- participants self-identified and were anonymous
- participants could only type in responses, and these responses had to be typed within a set time, e.g., 30 seconds of typing
- participants could drop out during the sessions without providing notice
- the conversations were technology-dependent, requiring an internet connection and a computer / mobile device, and the individual participant experience was unmonitored; therefore, participant understanding of the questions was not confirmed
Sampling
This section details the sampling approach applied throughout the study, presenting both the intended and the achieved outcomes. Sampling was planned for the research sessions but not for the large audience conversations, as these were anonymous. Comparisons are made between both samples, and the under / over-representation of particular populations is noted.
Planned sample
The planned sample frame was based on data obtained from the Joint Personnel Administration (JPA) systems in August 2021 of the population of SP at the three FAM pilot sites (HMNB Clyde, Aldershot Garrison and RAF Wittering) and the spouses and partners of SP at these pilot sites – see Table 5.
The number of participants in the sample frame was designed to deliver a broad sample of SP that had a mix of officers (OF) and other ranks (OR), SP with family/partners and those without, and SP living in different accommodation types.
Table 5: Population at FAM pilot sites and target number of sessions
Group | Aldershot Officers | Aldershot Other Ranks | HMNB Clyde Officers | HMNB Clyde Other Ranks | Wittering Officers | Wittering Other Ranks | Totals |
---|---|---|---|---|---|---|---|
Population | 579 | 3004 | 555 | 3386 | 160 | 790 | 8474 |
Population % | 7 | 35 | 7 | 40 | 2 | 9 | 100 |
Target number of sessions | 6 | 10 | 6 | 11 | 6 | 7 | 46 |
Target number of sessions % | 13 | 22 | 13 | 24 | 13 | 15 | 100 |
This JPA extract (August 2021) provided more detailed SP profiling information, and in addition to the FAM pilot site, the following sample criteria were agreed upon:
- the lower age limit was set at 18 years, and the upper age limit was unrestricted
- birth sex (male/female) was unrestricted
A stratified non-probability sampling approach, where not all members of the population have an equal chance of participating, was applied for the research sessions, and this was stratified by:
- FAM pilot site (Aldershot Garrison, HMNB Clyde (Faslane), RAF Wittering)
- Rank: Officers and Other Ranks
- FAM status: on FAM or not on FAM, broken down by:
  - SP who had taken a FAM option and were either eligible or ineligible to take a FAM option (SP could now be ineligible due to their assignment order being for less than 6 months – an eligibility requirement for FAM), i.e., on FAM
  - SP who had not taken a FAM option – and were eligible to take one or who had previously been eligible and were now ineligible, i.e., not on FAM
  - SP (and their spouses and partners) who had never been eligible and had not taken a FAM option were excluded from the research sample because they had no experience of FAM or had never been offered the choice to consider FAM options
- Accommodation type:
  - Single Living Accommodation (SLA)
  - Service Families Accommodation (SFA)
  - Private Rental Sector (PRS) on FAM / renting privately but not on FAM
  - Maintain Own Home (MOH) on FAM / own their home but not on FAM
Table 6 summarises the planned sample by FAM status and accommodation type across the pilot sites and by SP rank.
Table 6: Sample by FAM status and accommodation type
FAM status and accommodation | Measure | Aldershot Officers | Aldershot Other Ranks | HMNB Clyde Officers | HMNB Clyde Other Ranks | Wittering Officers | Wittering Other Ranks | Totals |
---|---|---|---|---|---|---|---|---|
On FAM and in SLA / SFA / PRS | Population | 109 | 375 | 38 | 164 | 22 | 88 | 796 |
On FAM and in SLA / SFA / PRS | Target sessions | 1 | 3 | 1 | 2 | 1 | 1 | 9 |
On FAM and MOH option | Population | 29 | 103 | 79 | 550 | 13 | 70 | 844 |
On FAM and MOH option | Target sessions | 1 | 2 | 1 | 4 | 1 | 2 | 11 |
On FAM in SFA and Transitional Protection | Population | Unknown | Unknown | Unknown | Unknown | Unknown | Unknown | Unknown |
On FAM in SFA and Transitional Protection | Target sessions | 1 | 1 | 1 | 1 | 1 | 1 | 6 |
Not on FAM and in SLA / SFA / renting privately / own their home | Population | 362 | 1994 | 306 | 1850 | 86 | 553 | 5151 |
Not on FAM and in SLA / SFA / renting privately / own their home | Target sessions | 3 | 4 | 3 | 4 | 3 | 3 | 20 |
TOTAL targeted sessions | TOTAL | 6 | 10 | 6 | 11 | 6 | 7 | 46 |
Table 6 notes:
- Transitional Protection is the policy whereby SP who experience a reduction in accommodation entitlement under FAM are protected from any sudden changes in the accommodation offer. This protection preserves the existing level of entitlement for the duration of the FAM pilot but will be reviewed at the end of the pilot and is therefore subject to change beyond the pilot period
- the population of SP on Transitional Protection was not available in the JPA data; Transitional Protection status was obtained directly from SP during the recruitment process
The following sampling considerations were applied:
- the sample size was chosen to ensure that data was collected across the Services, from different cohorts at the FAM pilot sites, and to be deliverable within the research project timeframe
- the number of research sessions proposed in the sample was proportional to the SP population at the pilot sites, i.e., the higher the SP population, the higher the number of sessions (an illustrative allocation is sketched below)
- SP population cohorts of 20 or fewer SP were excluded, as the risk of participants becoming identifiable was increased and the likelihood of successfully recruiting from such a small population within the research period was reduced
- the number of research sessions was initially capped at 12 per column of OF or OR in Table 6, recognising that more than 12 sessions may result in research saturation – the point when no new information is discovered by additional data collection
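The sketch below illustrates one way the considerations above (proportional allocation, exclusion of very small cohorts, and a per-cohort cap) could be combined. It is an assumption made for illustration and does not reproduce the exact Table 5 and Table 6 targets, which also reflected research judgement; the minimum of six sessions per cohort is inferred from Table 5 rather than stated in the report.

```python
# Illustrative only: a simple proportional allocation of research sessions
# across cohorts, applying the exclusion threshold and per-cohort cap described
# above. The minimum per cohort is an inferred assumption, and the actual
# targets in Tables 5 and 6 also reflected research judgement.

def allocate_sessions(populations: dict[str, int], total_sessions: int,
                      minimum: int = 6, cap: int = 12,
                      exclusion_threshold: int = 20) -> dict[str, int]:
    eligible = {k: v for k, v in populations.items() if v > exclusion_threshold}
    pop_total = sum(eligible.values())
    return {
        cohort: min(cap, max(minimum, round(total_sessions * pop / pop_total)))
        for cohort, pop in eligible.items()
    }

cohorts = {  # SP populations from Table 5
    "Aldershot OF": 579, "Aldershot OR": 3004,
    "HMNB Clyde OF": 555, "HMNB Clyde OR": 3386,
    "Wittering OF": 160, "Wittering OR": 790,
}
print(allocate_sessions(cohorts, total_sessions=46))
```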
Limitations
The sampling approach aimed to collect responses from a broad group of SP and, as is often the case with qualitative research, was not intended to be representative of the population; specifically:
- the research was not representative of the SP population at the FAM pilot sites or reflective of the entire Armed Forces
- the accuracy of the information in JPA may have affected the planned sampling approach, as the sampling approach was dependent on this JPA information
Achieved sample
The achieved sample delivered a breadth of SP views from across the pilot sites, ranks, and FAM statuses. However, the achieved sample featured over- and under-representation in some profiling groups. Information on this is presented in Tables 7 and 8 and in the notes below.
Table 7: Summary profile of achieved sample: research sessions – SP only
Group | Accommodation Type | Aldershot Officers | Aldershot Other Ranks | HMNB Clyde Officers | HMNB Clyde Other Ranks | Wittering Officers | Wittering Other Ranks | Totals |
---|---|---|---|---|---|---|---|---|
On FAM | SFA | 1 | 0 | 1 | 0 | 1 | 1 | 4 |
On FAM | SFA and TP | 2 | 0 | 2 | 0 | 2 | 0 | 6 |
On FAM | MOH | 3 | 3 | 1 | 1 | 2 | 3 | 13 |
On FAM | PRS | 2 | 0 | 0 | 1 | 1 | 1 | 5 |
On FAM | TOTALS | 8 | 3 | 4 | 2 | 6 | 5 | 28 |
Not on FAM | SLA | 1 | 1 | 2 | 0 | 1 | 1 | 6 |
Not on FAM | SFA | 3 | 0 | 1 | 1 | 0 | 0 | 5 |
Not on FAM | MOH | 2 | 0 | 0 | 1 | 2 | 1 | 6 |
Not on FAM | TOTALS | 6 | 1 | 3 | 2 | 3 | 2 | 17 |
Gender | Female | 4 | 0 | 1 | 0 | 1 | 2 | 8 |
Gender | Male | 10 | 4 | 7 | 3 | 8 | 5 | 37 |
Gender | TOTALS | 14 | 4 | 8 | 3 | 9 | 7 | 45 |
Table 7 note: SFA and TP refers to research participants who opted into TP and were in SFA.
Table 8: Summary profile of achieved sample: large audience conversations – SP only
Group | Accommodation Type | Aldershot Officers | Aldershot Other Ranks | HMNB Clyde Officers | HMNB Clyde Other Ranks | Wittering Officers | Wittering Other Ranks | Totals |
---|---|---|---|---|---|---|---|---|
On FAM | SFA | 4 | 1 | 0 | 0 | 1 | 2 | 8 |
On FAM | SFA and TP | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
On FAM | MOH | 1 | 1 | 1 | 0 | 0 | 1 | 4 |
On FAM | PRS | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
On FAM | TOTALS | 6 | 2 | 1 | 0 | 1 | 3 | 13 |
Not on FAM | SLA | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Not on FAM | SFA | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
Not on FAM | MOH | 2 | 0 | 0 | 0 | 2 | 1 | 5 |
Not on FAM | TOTALS | 2 | 0 | 0 | 0 | 2 | 2 | 6 |
Gender | Female | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
Gender | Male | 7 | 2 | 1 | 0 | 2 | 5 | 17 |
Gender | TOTALS | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Table 8 note: One participant from Aldershot did not provide their gender.
The following observations were made about the achieved sample (a worked comparison of the sample and population shares follows this list):
- FAM pilot site:
  - participants from Aldershot (n=28) were aligned with the population (41% sample versus 42% population)
  - HMNB Clyde (n=13) was under-represented (19% sample versus 47% population)
  - Wittering (n=28) was over-represented (40% sample versus 11% population)
- Rank:
  - Officers (n=45) were over-represented (65% sample versus 15% population)
  - Other Ranks (n=24) were under-represented (35% sample versus 85% population)
- FAM status:
  - SP on FAM (n=45) represent 65% sample versus 20% population
  - SP not on FAM (n=24) represent 35% sample versus 80% population
- Gender:
  - SP males (n=54) comprise 77% of the sample versus 93% population
  - SP females (n=10) comprise 14% of the sample versus 7% population
  - spouses and partners of SP (n=5), who were all female, comprise 8% of the sample
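The over- and under-representation figures above compare each group's share of the achieved sample with its share of the pilot-site population. The short sketch below reproduces that comparison; the combined sample size of 69 (45 research-session SP, 19 large-audience SP and 5 spouses and partners) is inferred from the achieved-sample figures and is stated here as an assumption.

```python
# Illustrative reproduction of the representation comparison above.
# TOTAL_SAMPLE is inferred from the achieved sample figures (45 + 19 + 5).

TOTAL_SAMPLE = 69

def representation(sample_n: int, population_pct: float) -> str:
    sample_pct = 100 * sample_n / TOTAL_SAMPLE
    if sample_pct > population_pct:
        direction = "over-represented"
    elif sample_pct < population_pct:
        direction = "under-represented"
    else:
        direction = "aligned"
    return f"{sample_pct:.0f}% sample versus {population_pct:.0f}% population ({direction})"

print("Officers:   ", representation(45, 15))   # 65% sample versus 15% population
print("Other Ranks:", representation(24, 85))   # 35% sample versus 85% population
print("HMNB Clyde: ", representation(13, 47))   # 19% sample versus 47% population
```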
Limitations
Over-representation in some profiling groups was greater than planned; for example, officers account for almost two-thirds of the sample despite making up less than one-fifth of the population. This could contribute towards:
- potentially biased viewpoints, as different ranks may experience different pressures to make the programme appear a particular way, i.e., successful or unsuccessful
- skewed results which do not accurately represent the views of the wider research population
Research recruitment and participation
This section details the research recruitment approach, including factors that caused additional consideration. The methods of obtaining consent and summaries of participation levels are also outlined. Participants consented to data collection that included video and voice recordings, as well as agreeing to the methods of anonymisation the research was proposing.
Research recruitment
The research sample was stratified using the variables listed below, which were determined from the population data. However, when selecting participants for the research sessions, the approach sought to ensure that a range of different participant characteristics was considered, e.g., ensuring a range of Service branches / trades was represented:
- length of service
- assignment type / type of commission
- branch arm group
- main trade
- gender
- Personnel Status Category (PStat Cat)
- working contexts, e.g., always on-site, a mix of site work and deployed
- spouses and partners of SP, i.e., those in a Long Term Relationship (Established) (LTR(E)), married or in a civil partnership, were also approached for participation
Participation and consent
Participation in the research programme was voluntary, and participants completed and signed a consent form that was returned electronically in advance of their research session. The consent form explicitly required participants to ‘opt in’ to each element of the data collection and to confirm their understanding that the data would be analysed in aggregate and released into the public domain. The data collection consent included consent for:
- video recordings
- voice recordings
- note-taking
- use of anonymised quotes, including the following descriptors: rank group (officer or other rank), Service, FAM site location, accommodation type and FAM status
The research noted the challenges of promoting voluntary participation and that SP, especially other ranks, may feel pressured to participate if their commanders had given them time off to do so. To mitigate against this, all participants were informed in the consent form and reminded before the data collection that:
- participation was voluntary, and anything shared would be anonymised and reported in aggregate
- participants would be able to provide information in a way that reflected their views, including ‘don’t know’ or ‘prefer not to say’
- participants could withdraw from the research at any point during the data collection, and this would not be communicated back to their Chain of Command (CoC)
- participants had the option to withdraw their input for the research sessions up to two weeks after data collection; only the data collection team would be aware of participants who had withdrawn
- re-contact with participants after data collection would only be carried out if permission was obtained during initial data collection, unless for quality control purposes
The recruitment approach for the research programme is outlined below:
- participation was voluntary and advertised via the CoC, FAM Cells at the pilot sites and using the details of SP who had previously expressed an interest in taking part in FAM related research; the CoC was asked to encourage and support participation, and the FAM Cells and base HIVES (welfare and support units) also directly supported recruitment and scheduling of research participants
- spouses and partners of SP were recruited through their serving partners, FAM Cells, the Families Federations and HIVES (a service supplying information and welfare referral for the Armed Forces community)
- spouses and partners provided their name, email address and the Service Number of the related SP, which were used to access the profiling variables in JPA and to confirm eligibility and profiling
- interested SP were asked to email a mailbox to register their interest, providing their name and Service Number
- following an expression of interest from SP and screening using JPA data, potential participants were emailed information on the research programme and consent forms for review and signing. The consent forms were signed and returned by the participants in advance of the research session, and this allowed participants time to withdraw if they later changed their minds
- participants were free to withdraw at any point up to the data collection session, and participants also had the option for their input to be withdrawn from the data set containing individual and pseudonymised inputs within two weeks of that data collection session
- data collected from the large audience conversations were fully anonymised, and participant withdrawal was not possible
- after individual pseudonymised inputs were thematically analysed and aggregated, individual inputs would not be identifiable, and participants were not able to withdraw their input from the aggregated results and report
- consent was obtained from participants in both the large audience conversations and research sessions
- it cannot be determined whether a participant of a research session also attended one of the large audience conversations
- participation incentives were not offered
Table 9 provides a summary of recruitment and participation for the 50 completed research sessions.
Table 9: Summary recruitment and participation profile for the research sessions
Recruitment | Service personnel | Spouses and partners of SP | Total |
---|---|---|---|
Expressed Interest | 242 | 26 | 268 |
Ineligible to take part [ineligible for FAM] | 47 | 18 | 65 |
Eligible to take part | 195 | 8 | 203 |
Scheduled sessions | 101 | 6 | 107 |
Did not attend/cancelled the session | 49 | 0 | 49 |
Withdrew from the session | 7 | 1 | 8 |
Attended a research session | 45 | 5 | 50 |
For the large audience conversations, all interested and eligible SP from the research sessions were invited to participate:
- the anonymity offered in the large audience conversations raised the risk that participation would not represent the population at the pilot sites, as these conversations were live, i.e., once the conversations started, new participants were not able to join, but existing participants were able to leave
- a considered and robust recruitment programme was identified as the best mitigation to deliver broad representation
Table 10 provides a summary of recruitment for the large audience conversations, which had 19 participants.
Table 10: Summary profile of the large audience conversations recruitment
Recruitment | SP on FAM | SP not on FAM | TOTAL |
---|---|---|---|
Expressed interest and invited | 85 | 65 | 150 |
Participated in the sessions | 12 | 7 | 19 |
Limitations
The following limitation in the recruitment and participation was noted:
- sample recruitment was based on participant availability (availability bias) and professional research judgement that was shaped by a screening questionnaire and sample selection
Analysis
This section details the qualitative research data analysis approach, the data blending that was applied, and an overview of the analysis groups. This is followed by explanations of the analysis activities, including how the unstructured text and Likert data was analysed, how the activity data was collected from the research sessions and its application to the EDM, and a summary of the EDM output.
Data analysis approach
The analysis approach to the research was qualitative content / thematic analysis and grounded theory aided by machine learning. Table 11 provides a high-level summary of the research data collected and the approach used in its analysis.
Table 11: Data types, sources, descriptions, and analysis approaches
Data Type | Data source | Description | Analysis approach |
---|---|---|---|
Unstructured Text / participant contributions | Research sessions and large audience conversations | Transcribed conversations or typed in participant contributions | Machine learning text analysis (content / thematic analysis) |
Unstructured Text / participant contributions | Research sessions | Research activity two (process and important moments when moving into new accommodation) | Machine learning text analysis (content / thematic analysis). |
Activities data and the EDM (structured and unstructured text) | Research sessions | Activities one and three (expectations versus experience regarding aspects of accommodation choice) | Analysis that incorporates the EDM to provide a visual understanding of the potential primary influences of satisfaction and dissatisfaction. |
Likert Scales | Research sessions | Agreement or disagreement questions | Assigning scale scores using agreed integral values to individual Likert items, initial analysis using Top 2 Box (T2B) and Bottom 2 Box (B2B), and then differences in mean scores across analysis groups may provide initial insight |
Likert Scales | Large audience conversations | Agreement or disagreement questions | Assigning scale scores using agreed integral values to individual Likert items, initial analysis using Top 2 Box (T2B) and Bottom 2 Box (B2B), and then differences in mean scores across analysis groups may provide initial insight |
Limitations
The following analysis limitation was noted:
- the use of grounded theory is a labour-intensive approach and opens a window for researcher-induced bias: as theory is developed from the ground up, preconceived ideas can present themselves in the way the analysis is conducted
Data blending
Collected data was blended for analysis; blending is a form of data triangulation that can help deliver a more accurate understanding of the research and reduce the biases inherent in reliance on one data source. Table 12 summarises the rationale and application of the data blending approach, and an illustrative sketch of the blended data structure follows the table.
Table 12: Data blending summary
Data blending | Considerations |
---|---|
Definition | Data blending is the process of combining research data to develop a final data set for analysis. Data blending is a form of data triangulation, whereby data that are studying the same phenomenon are collected at separate times or from different sources and a purposive and systematic selection and integration of populations and data is employed. |
Rationale | Data triangulation is an approach based on the logic that researchers can move closer to obtaining a ‘more accurate’ understanding if they take multiple measurements, use multiple methods, or examine a phenomenon at multiple levels of analysis. A separate analysis of the large audience sessions was discounted due to the lower-than-expected achieved sample; the achieved sample across the two sessions was 19. |
Description | The data collected from the large audience sessions were used to deepen or challenge the understanding of the themes and insights from the research sessions. |
Alignment across the data | Similar core research audiences (phenomenon), i.e., SP, accommodation choices, and FAM. Similar data collection periods for both research sessions and large audience conversations, i.e., September to October 2021. Similar research questions were asked across both data collection exercises, a systematic approach to blending these questions was applied. The data collection team was consistent across both data collection exercises. |
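As a minimal illustration of the blended data structure (an assumption about the shape of the data, not the project's actual tooling), each contribution can be tagged with its source so the large audience responses can be read alongside the research session responses during analysis.

```python
# Minimal sketch of a blended data set: both sources share one record shape,
# and the source tag is kept so the large audience contributions can deepen or
# challenge research-session themes. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Contribution:
    participant_id: str   # pseudonymised identifier
    source: str           # "research_session" or "large_audience"
    question: str         # research question the response relates to
    text: str             # the participant's contribution

def blend(sessions: list[Contribution],
          conversations: list[Contribution]) -> list[Contribution]:
    """Combine both data sources into a single data set for analysis."""
    return sessions + conversations

blended = blend(
    [Contribution("RS-001", "research_session", "Q1", "I expected more choice over location")],
    [Contribution("LA-014", "large_audience", "Q1", "The allowance was the main factor")],
)
print(len(blended), sorted({c.source for c in blended}))
```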
Limitations
The following data blending limitations were noted:
- the anonymity of the research participants in the large audience conversations resulted in a limited view on whether the profiling information they had provided was accurate or complete, whereas the research participants in research sessions were known participants whose profile information could be verified against existing MOD data
- the research sessions delivered longer, unstructured, and more detailed research participant contributions, whereas the large audience conversations delivered more concise research participant contributions (capped at 30 to 45 seconds of typing) with limited follow-up and moderator probing
- questions may have varied slightly across both data collection approaches due to moderator adjustments and delivery, as the questions were spoken in the research sessions versus typed in for the large audience conversations
Analysis groups
Considered analysis groups were based on achieved sample sizes of five or greater and allowed for reporting a range of views – see Table 13.
Table 13: Analysis groups description and rationale
Analysis group | Description | Rationale – potential differences |
---|---|---|
FAM site | Different pilot site locations with most participants affiliated to a specific Service. | Culture of each Service; the surrounding areas within the 50-mile radius of the site; and the availability and standard of accommodation on the site. |
Rank | Officers (OF) versus Other Ranks (OR). | Backgrounds and roles of ORs and OFs; eligibility and previous accommodation entitlements; and professional requirements, e.g., office role versus operational role. |
FAM status | On FAM and not on FAM | Using FAM and those that are not; using or not using FAM through choice or other (eligibility); and experiences of policy and process. |
Accommodation | Service accommodation (SLA or SFA) versus non-service accommodation (MOH or PRS) | The types of accommodation being used; and the reasons for accommodation choices. |
Limitations
The following limitation was noted in the analysis groups:
- particular analysis groups had greater representation in the achieved sample than others, limiting the generalisability and representativeness of the results.
Unstructured text analysis
The analysis of the unstructured text involved the application of content / thematic analysis and grounded theory using machine learning, which involved the following processes:
- sentiment analysis, where a positive/neutral/negative sentiment was assigned to the original unstructured text input
- data preparation through two processes to create the modified/prepared data:
  - lemmatising the original unstructured text input and sorting this text by grouping inflected or variant forms of the same word, ensuring a consistent review of the text, e.g., satisfied, satisfaction, satisfy
  - removing stop words, which are common English language words that do not add considerable meaning to sentences and can safely be ignored without sacrificing the meaning of the sentence
- coding the unstructured text, where the modified/prepared data was parsed and specific word mentions in the unstructured text, e.g., satisfied, were counted and matched to a specific text entry from a research participant (coded)
- topic modelling, where machine learning identified patterns of word usage and clustered those words into topics (themes), which helped organise and offer insight into the unstructured text
Topic themes were then shaped, and these themes formed the core reporting output with a focus on:
- overall theme
- differences across analysis groups
- tensions / alignment with other themes / research questions
- considerations
The output from this process was the original unstructured text input appended with the following (an illustrative sketch of these steps is shown below):
- sentiment analysis – positive / neutral / negative
- topic models – which required researcher input to assign a topic theme name
- coded mentions of popular words, e.g., allowance, distance
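The sketch below illustrates the shape of these steps. The project's analysis was run through Alteryx workflows (see the Quality Assurance section), so this scikit-learn version is an assumed stand-in rather than the actual implementation; lemmatisation and sentiment scoring are omitted here because they would typically rely on a dedicated library or the platform's own models.

```python
# Illustrative stand-in only: stop-word removal, coding of word mentions and
# topic modelling on a handful of made-up responses. The real analysis used
# Alteryx workflows; lemmatisation and sentiment scoring are omitted here.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [
    "Satisfied with the allowance and the distance to work",
    "The allowance did not cover the rent near the site",
    "Happy with the choice of where my family and I live",
]

# Stop-word removal and word counting (the 'coding' of specific mentions)
vectoriser = CountVectorizer(stop_words="english")
counts = vectoriser.fit_transform(responses)
vocab = list(vectoriser.get_feature_names_out())
print("mentions of 'allowance':", counts[:, vocab.index("allowance")].sum())

# Topic modelling: cluster word usage into a small number of topics (themes),
# which the researcher would then review and name.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)
for i, topic in enumerate(lda.components_):
    top_words = [vocab[j] for j in topic.argsort()[-3:]]
    print(f"topic {i}: {top_words}")
```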
Limitations
The following limitation was noted in the analysis of the unstructured text:
- the absence of a shared contextual understanding, which could have been achieved through contextual immersion, may have weakened the common baseline understanding (consistency in the sessions and descriptions between researcher and participant, and across the data collection) required when conducting content and thematic analysis
Likert data analysis
Likert data collected through closed questions were analysed at a summary level, and the proposed analysis approach involved:
- assigning scale scores using agreed integral values, e.g., an agree response on a five-point scale would be given a score of five
- initial analysis using Top 2 Box (T2B) and Bottom 2 Box (B2B) scores that combined the highest two responses and the lowest two responses, and recognised any neutral views (a minimal scoring sketch is shown below)
- computing arithmetic mean scores and standard deviations for the individual Likert items from the response scale scores was discounted due to the overall low sample size across the different Likert items
- the outputs focused on summarising Top 2 Box (T2B) and Bottom 2 Box (B2B) scores, providing a steer on whether a measure was viewed positively (T2B) or negatively (B2B)
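A minimal sketch of the T2B / B2B summary, assuming a five-point agreement scale; the exact response labels and integral values used in the analysis are assumptions made for this example.

```python
# Minimal sketch of Top 2 Box / Bottom 2 Box scoring on a five-point scale.
# The response labels and integral values are illustrative assumptions.

SCALE = {"Strongly disagree": 1, "Disagree": 2, "Neither agree nor disagree": 3,
         "Agree": 4, "Strongly agree": 5}

def box_scores(responses: list[str]) -> dict[str, float]:
    scores = [SCALE[r] for r in responses]
    n = len(scores)
    return {
        "T2B %": 100 * sum(s >= 4 for s in scores) / n,      # top two boxes
        "B2B %": 100 * sum(s <= 2 for s in scores) / n,      # bottom two boxes
        "Neutral %": 100 * sum(s == 3 for s in scores) / n,  # middle box
    }

print(box_scores(["Agree", "Strongly agree", "Neither agree nor disagree",
                  "Disagree", "Agree"]))
# {'T2B %': 60.0, 'B2B %': 20.0, 'Neutral %': 20.0}
```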
Limitations
The following limitations were noted in the analysis of the Likert data:
- the data presents difficulty in measuring the spread of responses, as the analysis assumes the distances between points on the scale are equidistant, which is likely not the case, and multiple responses are grouped together
- the data is limited in its presentation of the true attitude of respondents and only numerically summarises responses
Activity data and the EDM
Research participants were presented with a research activity that collected the participants’ expectations and experiences related to FAM and accommodation, where participants were asked to place attributes or outcomes on a grid:
- the axis on the bottom of the grid measured the participants’ view (expectations) on what they thought FAM or their accommodation choice might do for them (and their family if applicable). This was scored as high, medium, or low
- the axis on the left-hand side measured the participants’ view on what FAM or their accommodation choice had actually done (performance), and this was scored as excellent, moderate, or poor
- the combination of the choices provided nine boxes
Participants were provided with a series of cards with the following attributes or outcomes, as well as blank cards to write in additional attributes or outcomes:
- accommodation allowance based on needs
- rental payment allowance
- ability to choose where I live (the geographical location of accommodation)
- ability to choose how I live
- ability to choose whom I live with
- long-term relationship entitlement
- one-year (12 months) service eligibility requirement
- distance from workplace boundaries
- having a manageable commute between my accommodation and the workplace
- au pairs / nannies
- single parents with visitation rights
- ability to have greater stability for me (and my family if relevant)
- ability to remain mobile
Participants then selected relevant attributes or outcomes and placed those on the grid, providing their view on:
- what they expected
- how well this was delivered / the reality of their experience
The analysis considered the relationships between the attributes or outcomes related to FAM and accommodation in the research activities and overall satisfaction, to provide a view on disconfirmation and possible attributes for satisfaction or dissatisfaction. Table 14 shows an example of the expectation disconfirmation grading, and an illustrative tallying sketch follows the table.
Table 14: Example of the expectation disconfirmation grading
Expectation | Experience (Performance) | Overall Satisfaction | Disconfirmation |
---|---|---|---|
Low | Excellent | Satisfied | Positive |
High | Excellent | Satisfied | Neutral / Negative |
Medium | Poor | Satisfied | Negative |
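As a simple illustration of how the grid placements could be tallied before applying a Table 14 style grading, the sketch below counts where participants placed each attribute card; the participant identifiers, attributes and placements shown are invented for the example.

```python
# Illustrative only: tallying card placements on the 3x3 expectations-versus-
# experience grid. The placements below are invented; the grading applied to
# each cell follows the Table 14 style judgement.

from collections import Counter

placements = [
    # (participant, attribute card, expectation, experience)
    ("RS-001", "Rental payment allowance", "Low", "Excellent"),
    ("RS-002", "Rental payment allowance", "High", "Moderate"),
    ("RS-003", "Ability to choose where I live", "Medium", "Excellent"),
]

grid_counts = Counter(
    (attribute, expectation, experience)
    for _participant, attribute, expectation, experience in placements
)

for (attribute, expectation, experience), count in grid_counts.items():
    print(f"{attribute}: expectation={expectation}, experience={experience}, n={count}")
```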
Limitations
The following limitation was noted in the analysis of the activities data and the EDM:
- although particular attributes could be grouped onto a grid for comparison, not all research participants were able to fully explain their decisions or elaborate on their reasoning, which presents a gap in understanding regarding the activities; this limitation resulted in the data being collected and analysed but not presented in the main report
Quality Assurance (QA)
This section details the quality assurance (QA) processes applied to the development of this research report and provides an overview of the general QA processes and the data analysis and reporting QA processes.
General QA
QA was implemented as part of the research project management and involved:
- the selection of a research project team that had the experience, qualifications, and tools to conduct and deliver the research project; this research project team comprised a research delivery team and a research oversight team
- the research delivery and research oversight team members contributed to the research design and the data collection
- the research delivery team led on the data analysis and reporting
- the research oversight team reviewed and approved the analysis and reporting
- outlining roles and responsibilities on quality and governance, and applying quality and governance protocols drafted in a project-specific quality and governance plan that required:
  - all collected participant data were assigned a unique identifier noting the data source (Remesh or research session) and the profiling information that participants agreed to share for reporting, i.e., rank, pilot site, FAM status, and accommodation. This profiling information was appended to the data at all analysis stages and in all draft reports
  - any manual tasks related to data entry, data transfer or data analysis were completed by more than one individual (shared tasking), and then those shared tasks were peer-reviewed to ensure accuracy and quality of completion
  - all machine learning processes and outputs were also reviewed manually and were made available for further quality checks
  - version control was applied to all project documentation and outputs, with previous versions archived and made accessible for quality checks
  - regular communication and weekly quality review sessions between the research delivery team and the research oversight team were conducted
Data analysis QA
This section provides an overview of the QA activities completed at the interim and final data analysis stages.
At the interim data analysis stage – the stage where all the data had been collected and transcribed and the first processing of this full data set was conducted:
- interim data were analysed for emerging themes by the research delivery team, and these themes were discussed in a research delivery team analysis working session
- interim data were used to design and develop the Alteryx code (workflows) for conducting the thematic analysis design, i.e., the formulas and processes in the Alteryx platform that processed the machine learning tasks
- the research delivery team completed their interim data analysis and used these interim findings to develop an analysis plan that was shared with the research oversight team
- a working session with both research teams was held to discuss interim findings and the analysis plan, with both teams reviewing the interim findings against the research understanding developed during data collection
- the analysis plan outlined how the data would be synthesised and how the data would be reported
At the final data analysis stage:
- the final data was processed using the same process applied during the interim analysis stage, and the research delivery team applied the general QA processes, including shared tasking and peer review
- analysis groups in the final data were determined on question responses of five or more, and this allowed for the analysis and reporting of a wide range of views
- the final data was checked against the relevant question and participant responses before being processed through the Alteryx machine learning; this process created an Alteryx output that had:
  - the relevant question
  - participant profiling information
  - original (unprocessed) responses
  - Alteryx processed responses
  - this allowed for a review and quality check of the Alteryx processed outputs and any scoring against the original (unprocessed) data
- the Alteryx machine learning analysis processed eight topics from each analysed question, and each of the eight topics had responses that were scored between 0 and 100, with a higher score indicating closeness or a relationship in the responses, i.e., if three participants used the same words in their responses, their responses would be grouped with a high score
- topics with three or more responses and scores of 90 or higher were prioritised and analysed, and the research delivery team gave these topics (themes) names
- topics with three or more responses and scores between 70 and 90 were reviewed for analysis, and topic (theme) names were given, if appropriate
- topics with three or more responses and scores lower than 70 were briefly reviewed and then disregarded as weak topics not meriting further analysis (this triage rule is sketched after this list)
- the review of the Alteryx machine learning analysis was a shared task and was peer-reviewed by the research delivery team, who also manually checked the Alteryx machine learning analysis outputs; inaccuracies were highlighted and amended
- the final analysed data was transferred into the agreed reporting structure for reporting
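The topic triage described above can be summarised as a simple rule. The sketch below is an illustrative reading of those thresholds, treating each topic as having a response count and a single summary score, rather than the Alteryx workflow itself.

```python
# Illustrative reading of the topic triage thresholds described above; this is
# not the Alteryx workflow itself, and the single 'score' per topic is an
# assumption about how the 0-100 response scores were summarised.

def triage_topic(n_responses: int, score: float) -> str:
    """Classify one machine learning topic by response count and score band."""
    if n_responses < 3:
        return "disregard (fewer than three responses)"
    if score >= 90:
        return "prioritise and analyse (name the theme)"
    if score >= 70:
        return "review for analysis (name the theme if appropriate)"
    return "disregard (weak topic)"

print(triage_topic(4, 93))   # prioritise and analyse (name the theme)
print(triage_topic(3, 78))   # review for analysis (name the theme if appropriate)
print(triage_topic(5, 55))   # disregard (weak topic)
```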
Reporting QA
The final analysed data (the findings) were transferred into the reporting structure (draft reports) that underwent four drafting and review cycles.
The first draft report aligned the report structure with the findings and contained most of the report structure and most of the findings. The findings in this draft report held clear sourcing information, i.e., data was linked back to the final data, and any quotes featured the unique participant identifier and the agreed profiling information, allowing for QA checks across the draft report and final analysed data. This first draft was peer-reviewed by the research delivery team and the research oversight team. Feedback from the two teams was used to draft report version two.
Draft report version two featured more complete report sections and further accuracy checks on the findings. This second draft was peer-reviewed by the research delivery team and the research oversight team. Feedback from the two teams was used to draft report version three.
The third draft report held the complete report structure and full findings and focused on the reporting language and style, and further data checks were conducted. This third draft was peer-reviewed by the research delivery team and the research oversight team. Feedback from the two teams was used to draft report version four.
Draft report four was focused on the writing style and clarity, and further final data checks were conducted. This fourth draft was peer-reviewed by the research delivery team and the research oversight team. Feedback from the two teams was used to draft the final report. The final report does not list the data sourcing and participant profiling information; this information sits in earlier draft versions to allow for quality checks if required.
References
Mathison, S. (2005), ‘Encyclopaedia of Evaluation’, SAGE
Oliver, R. (2011), ‘Customer Satisfaction Research’, in ‘The Handbook of Marketing Research’, edited by Grover, R. and Vriens, M., SAGE Publications, Thousand Oaks
Grimmelikhuijsen, S. and Porumbescu, G. (2017), ‘Reconsidering the expectancy disconfirmation model. Three experimental replications’, Public Management Review, Volume 19, Issue 9, pages 1272 to 1292