Corporate report

Impact of Forensic Science Project Instruction Manual (accessible)

Published 2 August 2022

Guidance on designing Forensic Impact Studies using the Impact Point Model, and on capturing and analysing data to assess forensic effectiveness and timeliness.

1. Introduction

The aim of this suite of documents is to provide a toolkit to facilitate the creation and delivery of a study measuring the impact of forensic science based on the Impact Point Model[footnote 1]. This document provides a step-by-step guide to the Data Capture and Analysis Template (DCAT) Excel template, including technical information, screenshots and links to Excel support (via the official Microsoft website), together with background knowledge developed during the Impact of Forensic Science Project.

1.1 Designing an Impact Point Study

During the study initiation phase, you need to decide the aims and objectives of your own study in line with the Impact Point Model and to confirm that you are assessing meaningful variables and a sample set that adequately reflects your study's requirements. Begin the process of designing your study by completing the Project Initiation Document (PID) template, which is part of this Impact of Forensic Science Toolkit.

The PID template takes you through each step of the study design. It includes guidance (shown in italics) on how to develop the theme and scope of your study as well as which Impact Points (IPs) from the model you might want to study. It also describes capturing and defining a series of Impacting Factors (IFs) that will add detail to your study and will make your study outputs more meaningful.

Work through the PID template prior to any work developing your data capture spreadsheet, as the former is vital to inform the latter. The PID template will direct you in relation to the forensic discipline(s) and crime type(s) you will be capturing data about, as well as detailing the additional Impacting Factor (IF) columns required and defining the set of limited response options you will use for those parts of the dataset.

1.2 The Data Capture and Analysis Template

1.2.1 Use

Once the requirements of the study have been defined and documented in the PID, you can adapt the DCAT provided as the basis for the design of the spreadsheet you need to capture and analyse your data. We have created this template and all subsequent analysis steps using MS Excel due to the ease of access to this tool. If you would like to utilise a different tool, then that is possible; however, it is still important to replicate the data structures used in the DCAT and described in this guidance.

1.2.2 Overview of DCAT Structure

To address different research needs, the DCAT comprises the following examples, corresponding to two different types of studies:

  • Example 1, which is suitable for simple studies designed to assess the impact of a single forensic discipline and that do not assess any non-forensic impacts.
  • Example 2, which is suitable for more complex studies designed to assess the impact of multiple forensic disciplines (both singly and collectively) and that address the contributions of non-forensic approaches.

Each of the examples contains four tabs:

  • Dataset: A table with 6000 formatted rows. This table is the data source for the reports on the Analysis and Timeliness tabs, so any changes in either the dataset table structure or the values in it will affect those other tabs. The first 22 rows of this table contain example data, to demonstrate the instances of impact, the Impact Points (IPs) used, and how the spreadsheet should be populated. When you start using the template, replace these rows with your own data.
  • Data Validation: Contains the data validation lists; it can remain hidden unless it needs to be edited. Data validation is the process of ensuring your data responses are uniform and comply with restricted, predefined entry responses, to ensure the quality of the captured data. Data validation rules apply to most fields of the Dataset tab, where users select an option from a drop-down list filled with those predefined response options. The content of the drop-down list comes from a source list placed in the Data Validation tab.
  • Analysis: This tab shows the Forensic Effectiveness (FE) metrics (tables and charts), while IFs are added as slicers which provide a way of filtering the table or chart being viewed. This tab also displays summary data of the dataset included (number of cases and impact point instances, date range).
  • Timeliness: This tab shows the Forensic Timeliness (FT) metrics (tables and charts), while IFs are added as slicers. It also displays summary data of the dataset included (number of cases and impact point instances, date range).

The DCAT is an Excel template that you can use as a basis for initiating your own data collection and analysis. The template is customisable to meet your specific needs while still preventing data validation errors. You can edit the template but must then save it as a separate file.

2. Designing the Data Capture and Analysis

2.1 Structure

The Dataset tab in the DCAT provides space (top yellow coloured cells) for you to input the name of the study, the date range and total number of cases during this range. These values should be agreed and stated in the PID to provide the context of the collected data. You will see a single row to capture each individual instance of potential impact.

The data fields (column headings) found in both Datasets, in row 6, are standard fields which provide core data and are the minimum requirements for an IP study. If you need to add new fields, take time to consider which are necessary for the analysis and avoid capturing additional, non-essential information that can create extremely large datasets.

Some of the column titles can be adjusted to suit your study, as required. Allow only a single question per field so datasets remain comprehensible and consistent. Short and specific questions lead to clear responses without the need for clarification comments. Free-text or complex responses and comments cannot easily be quantified, so they cannot be transformed into meaningful metrics. Avoid requiring comments on Excel cells – again, these cannot be analysed in any meaningful way. If you notice that responses within a column need further clarification, consider the following actions: (a) check the wording of the question, (b) add a new field for a supplementary question, or (c) add a comments field at the end of your dataset to keep notes on your research in one place.

Column headings are coloured purple, blue or orange. Columns with blue headings are populated automatically according to existing formulae, which are protected from editing.[footnote 2] Columns with purple headings require input from the user. Finally, users should update the orange-headed columns with any IFs relevant to their study, as described in the project initiation document.

Each Dataset tab of the DCAT contains the following columns:

2.1.1 Example 1 (Single Discipline / No Non-forensics)

Col. A:     Unique Row Identifier: Used to identify each individual row of data. This is useful both as a descriptor and to allow an assessment of whether rows have been deleted or moved, and to identify which have been affected. This field should be populated with a unique (numeric) value in each row. The recommended way to do this is a table row count which begins at ‘1’ and ends with the number of rows of data. You should number rows automatically[footnote 3] rather than use a formula. The latter will re-number all the rows if any are moved or removed, so removed rows or gaps will not be detectable.
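If you replicate the DCAT in a tool other than Excel (see 1.2.1), the benefit of manual numbering can be checked programmatically. The sketch below, in Python, flags gaps and duplicates in a row-count identifier column; the function name and sample values are illustrative only, not part of the template.

```python
def check_row_identifiers(ids):
    """Return (missing, duplicates) for a 1..N row-count identifier column."""
    seen, duplicates = set(), set()
    for i in ids:
        if i in seen:
            duplicates.add(i)   # the same identifier entered twice
        seen.add(i)
    # Any value absent from 1..max indicates a deleted or lost row
    missing = sorted(set(range(1, max(ids) + 1)) - seen)
    return missing, sorted(duplicates)

# Illustrative data: row 3 has been deleted and row 5 entered twice
missing, duplicates = check_row_identifiers([1, 2, 4, 5, 5, 6])
```

A formula-based count would silently renumber after such changes, which is exactly why this check only works with static, manually numbered rows.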

Col. B:     (Anonymised) Case Reference: Used to identify the case / crime for the individual row(s) of information captured. Anonymised case references should be utilised when real case references cannot be shared for data security reasons. If using anonymised case references, these should be unique to each case, and a separate lookup file should be created with a key noting the true case / crime reference to allow cases to be traced in the event of data queries.
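A minimal sketch of this anonymisation approach follows, assuming a hypothetical ‘CASE-nnn’ reference format and invented real references. The lookup dictionary stands in for the separate key file, which should be stored securely and apart from the dataset.

```python
def anonymise_cases(real_refs):
    """Map each distinct real case reference to an anonymised one.

    The 'CASE-nnn' format is an assumption for illustration; use whatever
    format your PID specifies. Keep the returned lookup in a separate,
    access-controlled file, never alongside the anonymised dataset.
    """
    lookup = {}
    for n, real in enumerate(sorted(set(real_refs)), start=1):
        lookup[real] = f"CASE-{n:03d}"
    return lookup

# Illustrative real references; duplicates map to the same anonymised value
lookup = anonymise_cases(["OP/2021/774", "OP/2021/102", "OP/2021/774"])
```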

Col. C:     Impact Point (IP): Used to select which Impact Point is being captured in each row, using the dropdown list in the relevant field. This is a data validation list[footnote 4], which can be found in the PID and is set using a list on the data validation tab.

Col. D:     Question Posed (QP): Used to describe, for each IP, a figurative ‘question posed’ by the CJS to forensic science. This will assist the user of the data capture when populating other fields in the data capture relating to the IP. This field is automatically populated depending on the IP that has been selected in the relevant field.

Col. E:     Was forensic science commissioned relevant to this impact point: Used to describe whether forensic science was commissioned in relation to the IP and QP selected for the row. If forensics has been used to attempt to answer the question posed for the IP selected, the answer should be ‘Yes’; if not, the answer should be ‘No’. This field restricts answers to either ‘Yes’ or ‘No’ through a dropdown list using data validation. This is to ensure that the data is standardised. Ideally, data relating to every crime for your studied crime type and date period should be captured, whether forensic science was used or not.

This will allow you to derive overall impact metrics in the context of the entire number of crimes committed as well as only when forensic science was commissioned and can be informative when considering the potential impact that forensic science might have if commissioned more. If, however, forensic science is only utilised in a small proportion of total crimes and this requirement would make data capture overly laborious, then focus only on forensic cases, but gather contextual information elsewhere on how many offences occurred in total instead.

Col. F:     Did forensics contribute to answering the question posed for this impact point?: Used to indicate if forensic science has positively contributed to answering the QP for the IP selected for the row in the context of that case. For example, if the IP selected is ‘Establish Crime Committed’, the question posed is ‘Can we determine if a crime has been committed?’ The answer to this question will be ‘Yes’ if either a crime has been committed and forensics contributed to determining this, or a crime has not been committed and forensics contributed to determining this. The answer to the question will be ‘No,’ if forensic science was unable to determine either way, or ‘N/A’, if forensics was not used.

Col. G:     Estimated time taken for forensics to answer the question posed (Days): Used to enter the number of days it took forensics to answer the question posed. This field also uses data validation to limit input to whole numbers, allowing values between 0 and 500, with a ‘0’ value indicating same-day delivery of results. This enables standardisation of the data and calculation of averages such as the mean and median. If relevant to the study, time taken can be measured in hours instead of days, but ensure that you use the same unit consistently throughout the study, and do not use a mixture of hours for some IPs and days for others. This field is related to Forensic Timeliness (FT) metrics, and how quickly or slowly a forensic intervention took to lead to an impact. FT is measured from the point in time when forensic science is commissioned until the delivery of that evidence, so the time before the commissioning of the analysis is not counted here. This field relates to the impact point and question posed selected for the row.
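As an illustration, the mean and median that this field enables can be computed as follows. The day values are invented, and `None` stands for blank cells in rows where forensics was not commissioned.

```python
from statistics import mean, median

def timeliness_summary(days):
    """Summarise the 'days to answer' column for Forensic Timeliness.

    Blank cells (None) are ignored, as no forensic intervention occurred.
    """
    observed = [d for d in days if d is not None]
    return {"mean": mean(observed), "median": median(observed), "n": len(observed)}

# Illustrative column values: 0 means same-day delivery, None means blank
summary = timeliness_summary([0, 14, 3, None, 7, 30])
```

The median is often the more robust of the two figures here, as a handful of long-running cases can pull the mean well above typical turnaround times.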

Col. H:     Crime Type: Used to indicate the crime type for the referenced case. Responses can be chosen from the drop-down list prepopulated with common crime types, although further choices / refinements can be added to this list (see 2.2.2).

Col. I:       Date of Offence: Used to capture the date that the offence took place. The earliest and the latest of these dates specify the date range of the sample. This range is displayed in the summary table on the Analysis and Timeliness tabs, including any subsets of those created by using the slicers.

Col. J-L:       Impacting Factor 1, 2, 3: Update these headings with your chosen IFs, as set out in your PID. These three columns can be amended according to each study’s needs[footnote 5]. IFs describe the things that have a direct effect on forensic science and may affect either its ability to deliver results or its timeliness. Keep IFs limited to factors that have binary or a limited number of possible responses. You will need to set their data validation rules if they differ from (y/n) (see 2.2.2).

Col. M:   Final CJS outcome: Used to capture the final case outcome. Can be used to filter the impact metrics and so look for associations between impact and outcome. Responses can be chosen from the drop-down list.

Col. N:          Assessment of Final Case Outcome: Used to categorise the final case outcome. This field is automatically populated depending on the choice that has been selected in column M. A list of outcomes at the right-hand side of the Dataset table enables the formula to return results.

2.1.2 Example 2 (Multiple Disciplines and/or non-forensics)

Col. A – H, AC – AG: Common to both examples and described in the section above (see 2.1.1).

Columns AC – AG in Example 2 are the same as columns J – N in Example 1.

Note: Columns I – AB are unique to Example 2 and refer to the different forensic disciplines and non-forensic approaches used in each instance of impact. This allows an overall assessment of the impact of forensic science both collectively and by discipline.

Col. I, K, M, O: Use of specific forensic disciplines: Example 2 allows the capture of data relating to the use of up to four different defined forensic disciplines per instance of impact. These fields ideally require a binary (‘Yes’ / ‘No’) response to whether each named discipline was used, which can be given through drop-down lists. If there are data gaps, or forensics was not used at all in that instance, answer ‘Unknown’ or ‘N/A’ respectively. Amend the named forensic disciplines in the column headings of Example 2 based on your PID.

Col. J, L, N, P: Contribution of specific forensic disciplines: These fields pose the question of whether each of the forensic disciplines (named in columns I, K, M, O) contributed to answering the QPs, e.g. ‘Did Digital Forensics Contribute?’ (Col. J). These fields ideally require a binary (‘Yes’ / ‘No’) response, from the drop-down list. However, if the discipline was not used or the outcome is unclear, answer ‘N/A’ or ‘Unknown’ respectively.

Note: If the response to the question in Col. E (‘Was forensic science commissioned relevant to this impact point?’) is ‘No’, answer ‘N/A’ to the above fields.

Col. Q: Did Forensics Contribute Overall?: This is an automatically populated field. If any of the four named forensic disciplines had a positive contribution to answering the QP, i.e. any of columns J, L, N, P contains a ‘Yes’ for a specific instance, the value in column Q will be ‘Yes’. If forensics was not used at all in an instance, it returns ‘N/A’, as it was not given a chance to contribute and, therefore, the assessment of its contribution is meaningless.
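The auto-population logic described above can be sketched as follows. This is an inference from the description; the actual workbook formula is protected and may differ in detail.

```python
def contributed_overall(responses):
    """Combine the four per-discipline answers (columns J, L, N, P).

    Inferred logic: 'N/A' throughout means forensics was never used;
    otherwise any single 'Yes' makes the overall answer 'Yes'.
    """
    if all(r == "N/A" for r in responses):
        return "N/A"   # forensics not used at all in this instance
    if "Yes" in responses:
        return "Yes"   # at least one discipline contributed
    return "No"

# One discipline contributed, two were not used, one did not contribute
result = contributed_overall(["No", "Yes", "N/A", "N/A"])
```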

Col. R: Non-forensic approaches used?: This column is used to capture the use of non-forensic approaches for answering the QP, allowing a comparison between forensic and non-forensic interventions in terms of effectiveness, exclusive contribution and frequency of use. The answer to this question is ideally ‘Yes’ / ‘No’. However, select ‘Unknown’ in case of missing information or ‘N/A’ if the question is not applicable. If the answer is ‘No’, ‘Unknown’ or ‘N/A’, columns S-X, which refer to specific non-forensic approaches, will be highlighted grey to remind users to input ‘N/A’.

Col. S, U, W: Use of specific non-forensic approaches: These fields are to capture data for up to three different non-forensic approaches per instance of impact. The fields contain a standard binary question, e.g. ‘Witness Account Used?’, which can be responded to using drop-down menus. The named non-forensic approaches of Example 2 can be amended according to those you have described in your PID. If there are data gaps or non-forensics was not used at all in that instance, answer ‘Unknown’ or ‘N/A’ respectively.

Col. T, V, X: Contribution of specific non-forensic approaches: These fields pose the question of whether each of the non-forensic approaches (named in columns S, U, W) contributed to answering the QP, e.g. ‘Did Witness Account Contribute?’ (Col. T). The answer to this question can be ‘Yes’ / ‘No’, ‘Unknown’ for unclear information, or ‘N/A’ if no non-forensic approach was used.

Col. Y: Did Non-Forensics Contribute Overall?: This is an automatically populated column, which returns a positive (‘Yes’) value if at least one of the three non-forensic approaches (contributions recorded in columns T, V, X) had a positive contribution. If none of them contributed, the answer will be ‘No’. As with column Q, if non-forensics was not used at all, it returns ‘N/A’.

Col. Z: Did Forensics Contribute Exclusively?: This is an automatically populated field that summarises the responses in two other auto-populated fields, namely ‘Did Forensics Contribute Overall?’ (Col. Q) and ‘Did Non-Forensics Contribute Overall?’ (Col. Y).

Col. AA: Did Non-Forensics Contribute Exclusively?: This is an automatically populated column that summarises the responses in two other auto-populated fields, namely ‘Did Forensics Contribute Overall?’ (Col. Q) and ‘Did Non-Forensics Contribute Overall?’ (Col. Y).

Col. AB: Did Both Methods Contribute?: This is an automatically populated field that summarises the responses in two other auto-populated fields, namely ‘Did Forensics Contribute Overall?’ (Col. Q) and ‘Did Non-Forensics Contribute Overall?’ (Col. Y).

Note: Comparison between the contribution of forensics and non-forensics is only meaningful if both were used. If this condition is not satisfied, Z – AB fields show ‘N/A’.
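Taken together, the rules for columns Z, AA and AB amount to the comparison sketched below. This is inferred from the descriptions and the note above, not copied from the workbook’s protected formulae, so treat it as an assumption.

```python
def compare_methods(forensics_overall, non_forensics_overall):
    """Derive (Z, AA, AB) from the values of Col. Q and Col. Y.

    Inferred rule: if either method was never used ('N/A'), the comparison
    is not meaningful and all three fields show 'N/A'.
    """
    if forensics_overall == "N/A" or non_forensics_overall == "N/A":
        return ("N/A", "N/A", "N/A")
    f = forensics_overall == "Yes"
    n = non_forensics_overall == "Yes"
    return (
        "Yes" if f and not n else "No",   # Z: forensics contributed exclusively
        "Yes" if n and not f else "No",   # AA: non-forensics contributed exclusively
        "Yes" if f and n else "No",       # AB: both methods contributed
    )

# Forensics contributed, non-forensics was used but did not contribute
z, aa, ab = compare_methods("Yes", "No")
```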

Structure of Dataset tabs of Examples 1 and 2

Column Example 1 Example 2
A Unique Row Identifier Unique Row Identifier
B (Anonymised) Case Reference (Anonymised) Case Reference
C Impact Point Impact Point
D Question Posed Question Posed
E Was forensic science commissioned relevant to this impact point? Was forensic science commissioned relevant to this impact point?
F Did forensics contribute to answering the question posed for this impact point? Estimated time taken for forensics to answer the question posed (Days)
G Estimated time taken for forensics to answer the question posed (Days) Crime Type
H Crime Type Date of Offence
I Date of Offence Digital Forensics Used?
J Impacting Factor 1: Update this heading with your selected impacting factor Did Digital Forensics Contribute?
K Impacting Factor 2: Update this heading with your selected impacting factor Fingerprints Used?
L Impacting Factor 3: Update this heading with your selected impacting factor Did Fingerprints Contribute?
M Final CJS outcome DNA Used?
N Assessment of Final Case Outcome Did DNA Contribute?
O   Toxicology Used?
P   Did Toxicology Contribute?
Q   Did Forensics Contribute Overall?
R   Non-forensic approaches used?
S   Witness Account Used?
T   Did Witness Account Contribute?
U   Circumstantial Evidence Used?
V   Did Circumstantial Evidence Contribute?
W   Alibi Used?
X   Did Alibi Contribute?
Y   Did Non-Forensics Contribute Overall?
Z   Did Forensics Contribute Exclusively?
AA   Did Non-Forensics Contribute Exclusively?
AB   Did Both Methods Contribute?
AC   Impacting Factor 1: Update this heading with your selected impacting factor
AD   Impacting Factor 2: Update this heading with your selected impacting factor
AE   Impacting Factor 3: Update this heading with your selected impacting factor
AF   Final CJS outcome
AG   Assessment of Final Case Outcome

Note: Sections 2.2 and 2.3 are only needed if you intend to adapt and change any of the Dataset fields in the DCAT. If that is not the intention, you can move on to section 2.4 on data capture.

2.2 Data Validation

2.2.1 Overview

Data validation is a feature that restricts (validates) the values that a user can input to a specific field or area of a worksheet. Data validation reduces the probability of human error, especially in large or complex datasets.

Data validation rules apply[footnote 6] to the Dataset tabs of both examples to ensure consistency in the way the data is captured by allowing only standardised responses. The data validation rules applied to the DCAT are summarised below:

Summary of Data Validation Rules of Examples 1 and 2

Column Example 1 Example 2 Data validation rule
A Unique Row Identifier Unique Row Identifier Whole number – Acceptable numeric values between 1-10000.
B (Anonymised) Case Reference (Anonymised) Case Reference Any value - New rule should be created reflecting the format of Case Reference.
C Impact Point Impact Point (Drop-down) List - Multiple items.
D Question Posed Question Posed No rules needed for auto-populated fields.
E Was forensic science commissioned relevant to this impact point? Was forensic science commissioned relevant to this impact point? (Drop-down) List - Binary response.
F Did forensics contribute to answering the question posed for this impact point?   (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
F   Estimated time taken for forensics to answer the question posed (Days) Whole number – Acceptable numeric values between 0-500.
G Estimated time taken for forensics to answer the question posed (Days)   Whole number – Acceptable numeric values between 0-500.
G   Crime Type (Drop-down) List - Multiple items[footnote 7].
H Crime Type   (Drop-down) List - Multiple items[footnote 8].
H   Date of Offence Date (DD/MM/YYYY) – Start date: 01/01/2001
I Date of Offence   Date (DD/MM/YYYY) – Start date: 01/01/2001
I   Digital Forensics Used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
J Impacting Factor 1: Update this heading with your selected impacting factor   (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’. (see 2.2.2).
J   Did Digital Forensics Contribute? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
K Impacting Factor 2: Update this heading with your selected impacting factor   (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
K   Fingerprints Used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
L Impacting Factor 3: Update this heading with your selected impacting factor   (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
L   Did Fingerprints Contribute? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
M   DNA Used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
N   Did DNA Contribute? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
O   Toxicology Used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
P   Did Toxicology Contribute? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
Q   Did Forensics Contribute Overall? No rules needed for auto-populated fields.
R   Non-forensic approaches used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
S   Witness Account Used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
T   Did Witness Account Contribute? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
U   Circumstantial Evidence Used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
V   Did Circumstantial Evidence Contribute? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
W   Alibi Used? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
X   Did Alibi Contribute? (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
Y   Did Non-Forensics Contribute Overall? No rules needed for auto-populated fields.
Z   Did Forensics Contribute Exclusively? No rules needed for auto-populated fields.
AA   Did Non-Forensics Contribute Exclusively? No rules needed for auto-populated fields.
AB   Did Both Methods Contribute? No rules needed for auto-populated fields.
AC   Impacting Factor 1: Update this heading with your selected impacting factor (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’. (see 2.2.2).
AD   Impacting Factor 2: Update this heading with your selected impacting factor (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
AE   Impacting Factor 3: Update this heading with your selected impacting factor (Drop-down) List – Responses: ‘Yes’, ‘No’, ‘Unknown’, ‘N/A’.
AF Final CJS outcome Final CJS outcome (Drop-down) List - Multiple items.
AG Assessment of Final Case Outcome Assessment of Final Case Outcome No rules needed for auto-populated fields
  • Avoid copying and pasting content into the Datasets, as it can disrupt or remove the data validation rules and degrade the quality of the captured data.
  • The Data Validation tab of each example holds data validation lists[footnote 9]. Each list has the same name as the field that it is related to. To locate which list sets out a specific validation rule, go to Data (menu tab) > Data Validation (button) > Data Validation (dialogue window) and click on the button with the arrow, which is found next to the Source field.
  • When using ranges (for example, of numbers or dates) as a validation list, if the exact number of the variable is unknown, create sensible ranges first and apply data validation through the drop-down menu wherever possible. Document this in the PID.
  • Both Datasets include annotations, in the form of notes in the column headers and input messages or error alerts in cells, to minimise human error. These guide a user and provide a quick indication of what is expected or required in a column / cell. If you wish to change the content of these messages, follow the same path to access the Data Validation dialogue window (as described two bullets above), but go to the Input Message or Error Alert tabs. To amend or delete a note, right-click on the relevant header and choose Edit or Delete Note.
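If you rebuild the DCAT outside Excel, the two most common rule types in the table above (whole numbers between 0 and 500, and the standard four-option response list) can be re-expressed as simple checks, for example:

```python
# The standard drop-down response list used across most DCAT columns
RESPONSES = {"Yes", "No", "Unknown", "N/A"}

def validate_days(value):
    """Whole number rule: acceptable numeric values between 0 and 500."""
    return isinstance(value, int) and 0 <= value <= 500

def validate_response(value):
    """(Drop-down) list rule: only the four standardised responses."""
    return value in RESPONSES

ok = validate_days(14) and validate_response("Unknown")
bad = validate_days(501) or validate_response("Maybe")
```

Applying such checks at the point of entry serves the same purpose as Excel's data validation: invalid values never reach the dataset.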

2.2.2 Amending Data Validation Rules

To change validation rules, follow these steps:

Select the cells you want to apply the changes to and then choose:

Data (menu tab) > Data Validation (button) > Data Validation… (option)

  • Modify the options on the Data Validation dialogue window:
    • Allow: the type of values that are permitted to be entered,
    • In case of a list select the source, which is the range of cells where the list entries are found,
    • Input message (tab): The message shown when a cell is selected, reminding users of the data validation rules that apply to the specific cell / column,
    • Error message (tab): The error alert shown when invalid data is entered, clarifying what type of data is valid for the specific cell / column.

Note: To amend the lists of acceptable options for a drop-down menu, go to the Data Validation tab, change the entries of a list, and then update the data validation rule of the relevant column, as described directly above. If you have extended or shortened the list, enter the new cell range in the Source field of the Data Validation dialogue window (as above).

  • To clear and remove data validation rules, select the cell range where you want to cancel the rules, go to the Data Validation dialogue window, and select Clear All (bottom right button).

2.2.3 Conditional Formatting for Data Validation Purposes

In addition to data validation, conditional formatting[footnote 10] rules also apply as a measure to restrict the responses that can be entered into columns / fields. Formulas check whether responses satisfy specific logic conditions and, if not, will grey out cells to deter users from inserting invalid data, or highlight cells red as a warning. In detail:

  • In Dataset (Example 1), if a ‘No’ response is provided to the question ‘Was forensic science commissioned relevant to this impact point?’ (col. E), the following two cells (col. F, G) in the same row are highlighted grey to remind the user to add ‘N/A’ to the first and to leave the latter blank.
  • In Dataset (Example 2), if a ‘No’ response is provided to the question ‘Was forensic science commissioned relevant to this impact point?’ (col. E), the cells related to forensics in the same row (col. F, J-Q) are highlighted grey to remind the user to add ‘N/A’ to the response fields and to leave the time field blank. This also applies to column G: leave those cells blank if they are highlighted.
  • This action is replicated with columns S-X in Dataset (Example 2), if the response to ‘Non-forensic approaches used?’ (col. R) is ‘No’, ‘Unknown’ or ‘N/A’.
  • In Dataset (Example 2), if there is a ‘No’ response in column E (‘Was forensic science commissioned relevant to this impact point?’) and a ‘No’, ‘Unknown’ or ‘N/A’ response in column R (‘Non-forensic approaches used?’) in the same row, both cells are highlighted red. An entry with neither forensic science nor non-forensic approaches may be attributed to incorrect data input. In that case, review the input data.

To view, change or delete conditional formatting rules, go to Home (menu tab) > Conditional Formatting (button) > Manage Rules (button); a dialogue window pops up, where you can click on any rule you want to amend or delete.

2.3. Adapting the Dataset Tabs of the Template

The DCAT template is flexible and allows several adjustments in the Dataset tabs to adapt to the specifics of an individual study’s requirements. Both Dataset tabs are currently protected (locked) to prevent users from changing the contained formulas. If you need to adapt either of them to your project, unprotect it, edit it and finally protect it again[footnote 11]. Data validation rules can be amended as described above (see 2.2.2), but the table structure can also change. It is easy to add and remove rows and columns[footnote 12] without affecting formulas and the rest of the template spreadsheet.

The data tables of both examples are found in the range A6:L6006, with row 6 containing table headers and filter buttons. Below row 6 there is a total of 6000 formatted rows which are ready to be used to input data. It is possible to insert, delete, resize, or rename columns and to delete rows. Additionally, you can expand the formatted area of the table by highlighting the last two empty table rows and dragging the bottom right corner of the rectangle frame, as in the screenshot below. In this way, new rows keep the same formatting, formulae, and data validation rules, while the numbering of Col. A (‘Unique Row Identifier’) is continued.

Amendments must remain relevant to the study objectives (measuring effectiveness and timeliness) and must maintain the data validity described in previous sections.

Note: A clear dataset structure facilitates data capture and analysis. Therefore, users should avoid removing data validation rules, hiding columns / rows, deleting columns / rows, or merging cells. These actions may have multiple effects, e.g. clearing data validation, changing data capture logic, distorting analysis.

2.4 Data Capture

Both Dataset tabs are where the user will manually capture data from casefiles and case management systems describing an instance of potential impact. An instance represents each occasion, during the progress of a case, when the CJS posed one of the IP questions and forensic science was given an opportunity to answer it. There is potential for more than one instance of impact per case. There may be cases where an impact point is relevant multiple times, creating multiple instances (rows) of data, but equally an impact point may be relevant to one case in your data set but not another.

Ideally, capture all instances for every crime, irrespective of whether forensic science was used or not, to provide overall context to how often forensic science is delivering impact. If, however, forensic science was only utilised in a small proportion of crimes, you can focus only on those, while trying to gather the contextual information on how many offences occurred in total.

Data Cleansing and Checks

Once you have captured a full dataset, and before starting analysis, it needs to be cleansed. Data cleansing is the process of detecting and correcting (or removing) corrupt, incomplete or inaccurate entries. The list below, although non-exhaustive, summarises the most common errors to check for:

1. Typographical errors, including mistakes related to response format and to numerical values. These errors can be eliminated by imposing stricter validation rules, if possible, or by using column filters[footnote 13] to spot any inconsistencies. Unusually high or low values may be an indication of a typographical error in a numerical field.

2. Missing data, either several blank cells within the same row or several blank cells or ‘N/A’ responses within the same column.

Note: If there are many ‘Unknown’ or ‘N/A’ answers, the question at the top of the column needs to be reviewed as it may be irrelevant.

3. Duplicated rows should also be removed. To easily identify duplicates[footnote 14], use filters.

4. Logical errors, where information within a row is contradictory. They can be detected by conducting cross-column checks using filters.

5. Low counts of responses. When interpreting outputs, ensure minimum sample sizes are considered and relevant statistical tests are conducted.
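If the dataset is exported from the DCAT (for example as CSV), several of these checks can be automated. The sketch below is illustrative only: rows are assumed to have been read into Python dictionaries, and the column names (‘Crime Type’, ‘Days’) are hypothetical stand-ins, not the DCAT’s actual headings.

```python
# Illustrative data-cleansing checks run outside Excel on an exported dataset.
# Column names ('Crime Type', 'Days') are hypothetical examples only.

rows = [
    {"Crime Type": "Assault", "Days": 12},
    {"Crime Type": "Assault", "Days": 12},     # duplicate row
    {"Crime Type": "Homicide", "Days": None},  # missing value
    {"Crime Type": "Assault", "Days": 9999},   # suspiciously high value
]

# 1. Duplicated rows: identical field values appearing more than once.
seen, duplicates = set(), []
for row in rows:
    key = tuple(sorted(row.items()))
    if key in seen:
        duplicates.append(row)
    seen.add(key)

# 2. Missing data: blank / None cells anywhere in the row.
missing = [row for row in rows if any(v is None or v == "" for v in row.values())]

# 3. Unusually high or low numeric values (possible typographical errors);
#    the 365-day threshold here is an arbitrary example.
outliers = [row for row in rows if isinstance(row["Days"], int) and row["Days"] > 365]

print(len(duplicates), len(missing), len(outliers))  # -> 1 1 1
```

Each list can then be reviewed manually before the corresponding rows are corrected or removed in the Dataset tab.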

2.5 Automatically Generated Metrics

Both DCAT examples include an already prepared analysis, which can be updated with original data as soon as the pre-existing mock data of the Dataset is replaced. To do so, provided you have not added new columns to the Dataset, complete the two steps below:

1. Reposition any tables or charts that may expand after an update in the Analysis and Timeliness tabs.

Note: Pivot tables may not refresh properly without sufficient space to expand. Therefore, several rows and columns have been left blank to allow them to expand. However, if more space is required, re-arrange the tab by copying-pasting or dragging-and-dropping tables, charts and slicers.

2. Refresh pivot tables, charts and slicers to import the new data (see 2.6).

Following both steps above completes the analysis, and the reports can be presented. Automatically generated metrics work only when the format of the Dataset, including data validation, is maintained. If there are structural changes in the Dataset, the tables, charts and slicers in all other tabs should be re-designed. The process for this redesign is described in ‘Annex B: Prepare Analysis and Timeliness Tabs’.

2.6 Report Update and ‘Refresh’

2.6.1 Refresh Tables, Charts and Slicers

Following amendment to the Dataset, ensure that all linked pivot tables, charts, and slicers in the reports (Analysis and Timeliness) are refreshed to incorporate these changes. Right-click on the specific item that needs to be updated and select Refresh. To do the same for all items, click on a pivot table to activate the PivotTable Analyze menu and select Refresh All.

2.6.2 Data Source and Table Updates

Pivot tables import and organise data from a selected cell range in the Dataset. If you add or delete any rows or columns from the Dataset, then change the Data Source as follows:

Click on the table to open the PivotTable Analyze menu > select Change Data Source > change Table / Range in the pop-up window > press OK.

Pivot tables update after every change of their Data Source; otherwise, Refresh All tables (see 2.6.1).

Note: Before updating the reports, read the practical instructions carefully. They are found in yellow boxes next to ‘Header’ Tables and in the ‘Primary’ Pivot Tables area in all ‘Analysis’ and ‘Timeliness’ tabs.

3. Report structure

The tables below list which fields are the data sources for the pivot tables, charts, and slicers on each DCAT tab for each example. In addition, Analysis and Timeliness tabs should contain ‘Primary’ Pivot Tables (see B.2) below the main report area and a ‘Header’ Table (see B.3) at the top. The headings or terms in purple should be replaced when the DCAT is used for a real study. The blue-highlighted cells refer to auto-populated fields.

3.1.1 Example 1

| Field | Analysis Tab: Pivot Table and Chart | Analysis Tab: Slicer | Timeliness Tab: Pivot Table and Chart | Timeliness Tab: Slicer |
| --- | --- | --- | --- | --- |
| Was forensic science commissioned relevant to this impact point? (Was forensic science used?) | No | Yes | No | Yes |
| Did forensics contribute to answering the question posed for this impact point? (Did forensics contribute overall?) (see B.1.5) | Yes (Effectiveness metric); Yes (Simple pivot table) | Yes | No | Yes |
| Impact Point | Yes | Yes | No | Yes |
| Crime Type | Yes | Yes | No | Yes |
| ‘Estimated time taken for forensics to answer the question posed (Days)’ (see B.1.7) | No | No | Yes (Timeliness metric) | No |
| Impacting Factor 1: Update this heading with your selected impacting factor | No | Yes | No | Yes |
| Impacting Factor 2: Update this heading with your selected impacting factor | No | Yes | No | Yes |
| Impacting Factor 3: Update this heading with your selected impacting factor | Yes | Yes | No | Yes |
| Final CJS outcome | Yes | Yes | No | Yes |
| Assessment of Final Case Outcome | Yes | Yes | No | Yes |

3.1.2 Example 2

| Field | Analysis Tab: Pivot Table and Chart | Analysis Tab: Slicer | Timeliness Tab: Pivot Table and Chart | Timeliness Tab: Slicer |
| --- | --- | --- | --- | --- |
| Was forensic science commissioned relevant to this impact point? (Was forensic science used?) | No | Yes | No | Yes |
| Impact Point | Yes | Yes | No | Yes |
| ‘Estimated time taken for forensics to answer the question posed (Days)’ (see B.1.7) | No | No | Yes (Timeliness metric) | No |
| Crime Type | Yes | Yes | No | Yes |
| Did forensics contribute to answering the question posed for this impact point? | Yes | Yes | No | Yes |
| Digital Forensics Used? | No | Yes | No | No |
| Did Digital Forensics Contribute? (see B.1.5) | Yes (Effectiveness metric) | No | No | Yes |
| Fingerprints Used? | No | Yes | No | No |
| Did Fingerprints Contribute? | Yes (Effectiveness metric) | No | No | Yes |
| DNA Used? | No | Yes | No | No |
| Did DNA Contribute? | Yes (Effectiveness metric) | No | No | Yes |
| Toxicology Used? | No | Yes | No | No |
| Did Toxicology Contribute? | Yes (Effectiveness metric) | No | No | Yes |
| Did Forensics Contribute Overall? (see B.1.5) | Yes (Effectiveness metric) | Yes | No | Yes |
| Non-forensic approaches used? | Yes | Yes | No | Yes |
| Did Non-Forensics Contribute Overall? | Yes | Yes | No | Yes |
| Did Forensics Contribute Exclusively? | Yes (Effectiveness metric); Yes (Overall metric) | Yes | No | Yes |
| Did Non-Forensics Contribute Exclusively? | Yes | Yes | No | Yes |
| Did Both Methods Contribute? | Yes | Yes | No | Yes |
| Impacting Factor 1: Update this heading with your selected impacting factor | No | Yes | No | Yes |
| Impacting Factor 2: Update this heading with your selected impacting factor | No | Yes | No | Yes |
| Impacting Factor 3: Update this heading with your selected impacting factor | No | Yes | No | Yes |
| Final CJS outcome | Yes | Yes | No | Yes |
| Assessment of Final Case Outcome | Yes | Yes | No | Yes |

Annex A: acronyms and definitions

| Term | Acronym | Definition |
| --- | --- | --- |
| Criminal Justice System | CJS | |
| Data Capture and Analysis Template | DCAT | Excel template containing simple and complex dataset examples and the analysis that can be utilised (and if necessary adapted) to undertake a study using the Impact Point Model. |
| Forensic Effectiveness | Not applicable | Count or percentage of instances where forensic science positively contributes to answering a question posed at an impact point. |
| Impact Point | IP | Occasion when forensic science has the capability to contribute during an investigation, charging decisions or the court process. |
| Impact Point Model | IPM | Model that assesses the impact of forensic science based on the study of independent variables, called ‘impact points’. |
| Impacting Factor | IF | Factor that could influence the impact that forensic science has. Capturing these factors allows impact metrics to be categorised. Examples of impacting factors include exhibit types, victim/offender relationships or whether a crime scene was indoors or outside. |
| Instances of Impact | Not applicable | Occasion while investigating or prosecuting a crime when the CJS poses one of the impact point questions. This should represent the opportunities forensic science is given to contribute; for example, for ‘eliminate suspect’ this should be each individual instance where an individual has been eliminated (or not) rather than a single entry per case. |
| Project Initiation Document | PID | The document used for designing a study. A Project Initiation Document template is provided in this toolkit. |
| Forensic Timeliness | Not applicable | Measurement of how quickly forensic science contributes or is unable to contribute at an impact point. |
| Question Posed | Not applicable | A figurative ‘question posed’ by the CJS to forensic science in relation to each Impact Point. |

Annex B: prepare analysis and timeliness tabs

B.1. pivot tables and charts

B.1.1 Use

The Analysis tab summarises, combines and provides descriptive metrics derived from the Dataset. The most relevant tools for doing so are pivot tables[footnote 15], which are a way to summarise data.

B.1.2 How to create a pivot table

To create a pivot table, go through the steps below:

1. Select the area of the primary data (in this case the table on the Dataset tab).

2. Go to Menu: Insert > Pivot Table > Create Pivot Table (pop-up window) and select OK. Choose the pivot table report to be placed on a New Worksheet, and it will be created as a new tab in the workbook.

Note: When you create the pivot tables for a Timeliness tab, make sure that the option Add this data to the Data Model is checked before you continue, as this enables the creation of a Box Plot Chart (see B.1.8).

3. Create a new table in the pivot table area by dragging and dropping one field (e.g. ‘Crime Type’) into the Rows area and then the same field into Ʃ Values twice in the PivotTable Fields[footnote 16] pane.

4. Once the pivot table is created, rename the table headers. Click on the black triangle on the first field (‘Count of Crime Type’ in Ʃ Values), choose Value Field Settings from the drop-down menu and edit the Custom Name in the pop-up window. Give it the name ‘Count’ and press OK.

5. The second field (‘Count of Crime Type2’) should appear as a percentage. First, follow the same steps as above to rename it to ‘Percentage’. Second, in the Value Field Settings window, click on the Show Values As tab, choose % of Grand Total and press OK.

6. To change the Number Format (especially decimals), click the bottom left button in the Value Field Settings pop-up window, select Category > Percentage, change the number for Decimal places and press OK.

The pivot table should look like the example below.

| Crime Type | Count | Percentage |
| --- | --- | --- |
| Assault | 5 | 25.0% |
| Criminal damage | 1 | 5.0% |
| Homicide | 3 | 15.0% |
| Serious sexual offences | 1 | 5.0% |
| Serious violence | 4 | 20.0% |
| Street robbery | 6 | 30.0% |
| Grand Total | 20 | 100.0% |

Note: The PivotTable Analyze menu must be open whenever you wish to modify a pivot table[footnote 17]. To open it, just click on any cell of the table. You can also format the table by opening the Design menu.
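For readers who prefer to verify the summary outside Excel, the pivot logic above (a count and a % of Grand Total per crime type) can be reproduced in a few lines of Python, here using the same mock figures as the example table:

```python
# Reproduces the Count / Percentage pivot table using the mock figures
# from the example (20 instances across six crime types).
from collections import Counter

instances = (["Assault"] * 5 + ["Criminal damage"] * 1 + ["Homicide"] * 3 +
             ["Serious sexual offences"] * 1 + ["Serious violence"] * 4 +
             ["Street robbery"] * 6)

counts = Counter(instances)
total = sum(counts.values())
# For each crime type: (count, percentage of grand total, 1 decimal place).
summary = {crime: (n, round(100 * n / total, 1)) for crime, n in sorted(counts.items())}

for crime, (n, pct) in summary.items():
    print(f"{crime}: {n} ({pct}%)")
```

The figures printed match the Count and Percentage columns of the example pivot table (e.g. Street robbery: 6 (30.0%)).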

B.1.3 Pie Charts for Pivot Tables

Charts visualise the content of the pivot tables and are useful for presenting data in slideshows or reports. You can insert and edit a chart[footnote 18] as below:

  • Select the table area > go to the PivotTable Analyze menu > select PivotChart > choose Pie (first pie chart) from the Insert Chart pop-up window.
  • Delete the chart title and legend.
  • Delete chart fields: right-click on one field (grey rectangular buttons in the chart area) > choose Hide All Field Buttons on Chart from the drop-down menu.
  • Choose chart labels: click on the chart > go to the Design menu > click on Add Chart Element > choose Data Labels > option Data Callout.

B.1.4 Adding New Pivot Tables

To add another pivot table on the same tab, copy and paste the existing one to another area of the tab; this effectively duplicates the table. Then click on any part of the duplicate table so that the PivotTable Fields pane appears on the right-hand side. Choose another field, e.g. uncheck ‘Crime Type’ and check ‘Question Posed’. In this way, the data source is also copied automatically.

B.1.5 Effectiveness Metrics

Effectiveness metrics are a category of pivot tables further divided into two types:

  • Overall contribution of forensic science and
  • Contribution of each forensic discipline in answering the QPs related to the IPs.

The pivot tables in this chapter have a different layout, as they segment the sample by positive and negative responses, displaying a count and a percentage for each response and Impact Point.

The screenshot below is an example of the first effectiveness metric (overall contribution), which is used in both examples of the DCAT.

To create the same type of table, follow the process of creating or adding a new pivot table and add fields to the areas as the below screenshot suggests.

The second type of effectiveness pivot table (screenshot below) is found only in the Analysis tab in Example 2, as it refers to the contribution of each forensic discipline separately. Although its structure is similar to the table of the first type, a slicer has been added to exclude instances when a specific forensic discipline was not used. The effectiveness score of each discipline is therefore calculated based only on the instances when it had a real chance to contribute.

The slicers at the top of these tables mirror the ones on the left-hand side of the tab (Column A).
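The per-discipline calculation described above can be sketched in Python to make the logic explicit. This is a hedged illustration rather than the DCAT’s implementation: the field names and mock responses are assumptions modelled loosely on the Example 2 headings.

```python
# Sketch of the per-discipline effectiveness score: contributions are counted
# only among instances where the discipline was actually used, mirroring the
# slicer that excludes 'not used' rows. Field names and values are mock data.

instances = [
    {"DNA Used?": "Yes", "Did DNA Contribute?": "Yes"},
    {"DNA Used?": "Yes", "Did DNA Contribute?": "No"},
    {"DNA Used?": "No",  "Did DNA Contribute?": "N/A"},  # excluded from the score
    {"DNA Used?": "Yes", "Did DNA Contribute?": "Yes"},
]

used = [row for row in instances if row["DNA Used?"] == "Yes"]
contributed = [row for row in used if row["Did DNA Contribute?"] == "Yes"]
effectiveness = 100 * len(contributed) / len(used)  # % of real chances taken

print(f"DNA effectiveness: {effectiveness:.1f}%")  # -> DNA effectiveness: 66.7%
```

Note that dividing by all four instances instead of the three where DNA was used would understate the discipline’s effectiveness, which is exactly what the slicer prevents.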

B.1.6 Effectiveness stacked bar charts

The effectiveness charts are stacked bars and 100% stacked bars, so they display absolute measures as well as relative effectiveness as a percentage. However, you cannot choose which columns of the linked pivot table to include in the chart. To split counts and percentages into different charts, copy the initial pivot table twice in the ‘Primary’ Pivot Table area and keep only the count columns in the first and the percentage columns in the second. These are the data sources (‘root tables’) for each of the charts you will create. This choice is made through Value Field Settings (see B.1.2).

This process is followed for both types of effectiveness metrics, those referring to the overall contribution of forensic science as well as those referring to the contribution of individual forensic disciplines (see B.1.5). A pair of ‘root tables’ should be created for each of the forensic disciplines prior to the creation of the stacked bar charts, like in the screenshot below.

Finally, select each table and Insert a chart, as described above (B.1.3), but choose Stacked bars and 100% stacked bars.

B.1.7 Timeliness Metric

A box plot chart[footnote 19] is used for visualising Timeliness metrics, as it shows quartiles, average (mean), median, standard deviation, minimum and maximum values as well as outliers. An outlier is an observation that lies an abnormal distance from other values in a random sample from a population.[footnote 20] Box and whiskers show the distribution of data across quartiles, highlighting the mean and outliers. This is a complex representation of data, and preparation is required before creating a pivot table and an attached chart:

  • Start Power Pivot[footnote 21]: Go to File > Options > Add-Ins > COM Add-ins > select Microsoft Power Pivot for Excel.
  • Go to Power Pivot > New measure and add the following measures for each DCAT example separately:
| Measure (name in Power Pivot) | Syntax |
| --- | --- |
| Ex1 Count | =COUNT('Example1'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex2 Count | =COUNT('Example 2'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex1 Lower Quartile | =PERCENTILE.INC('Example1'[Estimated time taken for forensics to answer the question posed (Days)],0.25) |
| Ex2 Lower Quartile | =PERCENTILE.INC('Example 2'[Estimated time taken for forensics to answer the question posed (Days)],0.25) |
| Ex1 Upper Quartile | =PERCENTILE.INC('Example1'[Estimated time taken for forensics to answer the question posed (Days)],0.75) |
| Ex2 Upper Quartile | =PERCENTILE.INC('Example 2'[Estimated time taken for forensics to answer the question posed (Days)],0.75) |
| Ex1 Average | =AVERAGE('Example1'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex2 Average | =AVERAGE('Example 2'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex1 Median | =MEDIAN('Example1'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex2 Median | =MEDIAN('Example 2'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex1 Standard Deviation | =IFERROR(STDEV.S('Example1'[Estimated time taken for forensics to answer the question posed (Days)]),0) |
| Ex2 Standard Deviation | =IFERROR(STDEV.S('Example 2'[Estimated time taken for forensics to answer the question posed (Days)]),0) |
| Ex1 Minimum | =MIN('Example1'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex2 Minimum | =MIN('Example 2'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex1 Maximum | =MAX('Example1'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex2 Maximum | =MAX('Example 2'[Estimated time taken for forensics to answer the question posed (Days)]) |
| Ex1 *Box 1 (Hidden) | =[Ex1 Lower Quartile] |
| Ex2 *Box 1 (Hidden) | =[Ex2 Lower Quartile] |
| Ex1 *Box 2 | =[Ex1 Median]-[Ex1 Lower Quartile] |
| Ex2 *Box 2 | =[Ex2 Median]-[Ex2 Lower Quartile] |
| Ex1 *Box 3 (Upper) | =[Ex1 Upper Quartile]-[Ex1 Median] |
| Ex2 *Box 3 (Upper) | =[Ex2 Upper Quartile]-[Ex2 Median] |
| Ex1 *Whisker Bottom | =[Ex1 Lower Quartile]-[Ex1 Minimum] |
| Ex2 *Whisker Bottom | =[Ex2 Lower Quartile]-[Ex2 Minimum] |
| Ex1 *Whisker Top | =[Ex1 Maximum]-[Ex1 Upper Quartile] |
| Ex2 *Whisker Top | =[Ex2 Maximum]-[Ex2 Upper Quartile] |
  • These new measures create new fields, which can be found in the PivotTable Fields menu and contain the indication fx (function) before their name, e.g. fx Ex2 Count.
  • The new pivot table for Timeliness can be placed on a separate tab. Generate Timeliness metrics by creating a pivot table in the ‘Primary’ Pivot Table area of the tab, adding the ‘Impact Point’ field to Rows and only the fields ‘*Box 1 (Hidden)’, ‘*Box 2’ and ‘*Box 3 (Upper)’ to Ʃ Values.

  • Copy the pivot table to the main report area at the top of the tab and add the rest of the newly created measures to Ʃ Values. The new table should look like the example below.
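As a cross-check on the Power Pivot measures, the same statistics can be computed in Python: statistics.quantiles with method="inclusive" uses the same interpolation as Excel’s PERCENTILE.INC. The ‘Days’ values below are mock data for illustration.

```python
# Computes the box-plot measures from the table above for a mock set of
# 'Estimated time taken ... (Days)' values.
import statistics

days = [2, 5, 7, 9, 14, 21, 30]  # mock data

# method="inclusive" matches Excel's PERCENTILE.INC interpolation.
q1, median, q3 = statistics.quantiles(days, n=4, method="inclusive")
measures = {
    "Count": len(days),
    "Lower Quartile": q1,
    "Upper Quartile": q3,
    "Average": statistics.mean(days),
    "Median": median,
    "Standard Deviation": statistics.stdev(days) if len(days) > 1 else 0,
    "Minimum": min(days),
    "Maximum": max(days),
}
# Stacked-bar segments used to draw the box plot:
measures["*Box 1 (Hidden)"] = measures["Lower Quartile"]
measures["*Box 2"] = measures["Median"] - measures["Lower Quartile"]
measures["*Box 3 (Upper)"] = measures["Upper Quartile"] - measures["Median"]
measures["*Whisker Bottom"] = measures["Lower Quartile"] - measures["Minimum"]
measures["*Whisker Top"] = measures["Maximum"] - measures["Upper Quartile"]

print(measures["Median"], measures["*Box 2"], measures["*Whisker Top"])  # -> 9.0 3.0 12.5
```

Comparing these values with the Power Pivot measures for the same rows is a quick way to confirm that the DAX formulas have been entered correctly.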

B.1.8 Timeliness Box Plot Chart

  • To make a Box Plot Chart for Timeliness, go to the ‘Primary’ Pivot Table (which was initially created) and select the table area. Then go to PivotTable Analyze menu > PivotChart > select Column > Stacked Column and press OK.
  • Set meaningful values for the minimum and maximum vertical axis bounds, as this will improve the layout / height of the chart bars. To do so, visit the formatting[footnote 22] options in the Format task pane (right-click on the chart > Format Chart Area from the drop-down menu).
  • Move your chart underneath the wide pivot table at the top of the tab.
  • Add error bars to the top boxes (grey): select the top boxes > menu Design > Add Chart Element > Error bars > More Error Bars Options… > in the Format Error Bars pane select Vertical Error Bar: Direction: Plus. Choose Error Amount: Custom. Press Specify Values and enter the values in the ‘*Whisker Top’ column of the pivot table (excluding the Grand Total) in the Positive Error Value field of the pop-up window.

  • To add the error at the bottom boxes (blue) the process is similar: select bottom box > menu Design > Add Chart Element > Error bars > More Error Bars Options… > opens Format Error Bars pane > select Vertical Error Bar: Direction: Minus. Choose Error Amount: Custom. Press Specify Values and enter the values in the ‘*Whisker Bottom’ column of the pivot table (excluding the Grand Total) in the Negative Error Value field of the pop-up window.
  • After adding the error bars, format the bottom boxes by right-clicking on them and selecting Fill: No Fill and Outline: No Outline from the drop-down menu.
  • For the rest of the boxes (orange and grey), fill them with a light grey colour and a distinct outline.
  • To add average markers: go to the source table of the chart (the small ‘Primary’ Pivot Table with only three fields), click on it to make the PivotTable Fields pane visible and add the field ‘fx Mean / Average’ to the Ʃ Values area. The field is added to the table as well as to the chart as another box over the light grey boxes. Right-click on the new box > select Change Series Chart Type… from the drop-down menu > choose the chart type and axis for your data series > check ‘Mean / Average’ and select Line with Markers from the drop-down menu.
  • As a result, a yellow line will go through the bars.

Note: Make sure that a secondary axis has not been added, as this can result in distortions in the graph.

  • Finally, right-click on the line and select No Outline from the drop-down menu. The chart should look like the one below.

B.2. ‘Primary’ Pivot Tables

Create the ‘Primary’ Pivot Tables, namely ‘Number of Cases’, ‘Number of Impact Points’ and ‘Date Range’, to which the ‘Header’ Table is linked (see B.3). The position of all three tables on the Analysis tab helps the user better manage connections between them and the slicers or other pivot tables. For aesthetic reasons, ‘Primary’ Pivot Tables can be hidden below the report. You also need to arrange your tab layout so that there is enough space between the tables and they do not overlap when they expand after updates. The following sections describe the steps for creating the ‘Primary’ Pivot Tables.

B.2.1 Number of Cases

Creating a ‘Primary’ Pivot Table for the number of Cases requires the following actions:

  • Follow steps 1-3, as described above (see B.1.2), and drag and drop the field ‘Anonymised Case Reference’ into Rows and Ʃ Values. For this table, drag and drop the field into the Ʃ Values area only once.
  • Go to ‘Count…’ in the Ʃ Values area > open the Value Field Settings window > go to the Summarize Value Field By tab > rename the field and choose Count as the type of calculation (not Sum, which may have been pre-selected).
  • Then open the second tab Show Values As and select Show values as: Index.
  • In the row above the pivot table, add the title ‘Number of Cases’ and, right next to it, the formula below, selecting the entire value area (cell range) of the table:

 =IF((COUNTIF(B23:B31,1)-1)<0,0,(COUNTIF(B23:B31,1)-1)) [footnote 23]

The formula counts the number of unique cases in the table, which is especially useful in large datasets. This number will be used for the ‘Header’ Table (see B.3).
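The intent of the formula (a count of distinct case references) can also be illustrated outside Excel; the case reference values below are hypothetical:

```python
# Counts unique cases, as the COUNTIF formula does for the
# 'Anonymised Case Reference' column. The references are mock values.
case_refs = ["CASE-001", "CASE-001", "CASE-002", "CASE-003", "CASE-003", "CASE-003"]

number_of_cases = len(set(case_refs))  # distinct references only
print(number_of_cases)  # -> 3
```

Six rows (instances) here belong to only three cases, which is exactly the distinction between instances of impact and cases that the ‘Header’ Table reports.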

B.2.2 Number of Impact Points

To create a ‘Primary’ Pivot Table for the number of Impact Points, drag and drop the ‘Impact Point’ field into the Ʃ values area, like in the screenshot below.

Note: Although the main (red-coloured) primary tables, displaying the number of cases and the impact points, are connected to all slicers (see B.4), the duplicate (green-coloured) tables are not connected to them. The latter are used as denominators for the calculation of the percentages in the ‘Header Tables’ (see B.3).

B.2.3 Date Range

The date range is required for the ‘Header’ Table (see B.3), as it sets out information relating to the sample selected for the study. To form a Date Range table, as in the below screenshot, follow these steps:

  • Drag and drop the ‘Date of Offence’ field into the Ʃ Values area, go to Value Field Settings and Summarize Values By Min.
  • Repeat the same process to create a similar table but Summarize Values by Max this time.

This combination of tables takes only limited space on the worksheet, while the figures are easily linked to the ‘Header’ Tables.

B.3 ‘Header’ Tables

The ‘Header’ Table is the top table on all Analysis and Timeliness tabs; it provides the total number of cases and impact point instances shown in the pivot tables, together with their date range. The first two values are also expressed as percentages. When slicer filters apply, the numbers may change as slicers segment the sample, so the values of the ‘Header’ Table change accordingly. This information helps users assess whether the volume of responses is adequate and alerts them when slicer filters have resulted in a very small sample set. The ‘Header’ Table is a dynamic element, containing several formulas and links.

  • Count of Number of Cases / Impact Points: Copy the ‘Number of Cases’ and the ‘Number of Impact Points’ from the relevant ‘Primary’ Pivot Table underneath the main report area and paste them to the ‘Header’ Table using Paste Link[footnote 24]. When the source table changes due to the application of a slicer filter or amendments in the dataset, the figure of the ‘Header’ Table will be updated accordingly.

  • Percentage of Cases / Impact Points: Calculation that takes the ‘Number of Cases’ or the ‘Number of Impact Points’ and divides it by the overall number of cases or impact points. The cell is formatted as a percentage[footnote 25]. When slicer filters apply, showing only a subset of cases, the percentage of the displayed cases updates automatically.

  • Date Range (cells D5:E5 - merged): Use the TEXT formula to import the earliest and the latest dates as below.

 =TEXT(G256,"dd/mm/yy")&" - "&TEXT(G257,"dd/mm/yy")
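An equivalent of this formula in Python, assuming the offence dates are available as date objects (the sample dates are invented for illustration):

```python
# Formats the earliest and latest 'Date of Offence' as a dd/mm/yy range
# string, mirroring the TEXT formula. Sample dates are mock data.
from datetime import date

offence_dates = [date(2021, 3, 14), date(2021, 7, 2), date(2022, 1, 30)]
date_range = (min(offence_dates).strftime("%d/%m/%y") + " - " +
              max(offence_dates).strftime("%d/%m/%y"))
print(date_range)  # -> 14/03/21 - 30/01/22
```

The min / max step corresponds to the two ‘Date Range’ ‘Primary’ Pivot Tables (Summarize Values By Min and Max) described in B.2.3.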

B.4 Slicers

B.4.1 The Use of Slicers

Slicers[footnote 26] are objects used for applying filters to pivot tables and pivot charts to display a specific subset of the data. Slicers provide the criteria upon which you can segment your data and associated analysis.

The following screenshots provide an example of an Analysis report from the DCAT, which shows the ‘Header’ Table and the ‘Effectiveness Metric’ before and after the application of a slicer filter. In the second screenshot, the sample has been sliced by ‘Crime Type’, so only cases of ‘Violence with injury’ are displayed. As a result, the effectiveness figures are different as well as the figures in the ‘Header’ Table.

B.4.2 Adding and Managing Slicers

Slicers are derived from the columns of your dataset. You can add as many slicers as you need for your analysis assuming each of them has been captured as a column of data with finite (ideally limited) response options. To add a slicer, follow these steps:

  • Click on a pivot table to activate the PivotTable Analyze menu and select Insert Slicer. Make sure that you have clicked on a pivot table, so the dialogue pop-up window (screenshot below) will show all the relevant fields, which can be used for slicers. Select the slicers you need and press OK.
  • To manage slicer settings, right-click on the slicer and choose Slicer Settings from the drop-down menu. You can change the Caption, which is the title at the top of the slicer box; you can also sort items (values displayed) or Hide items with no data.
  • To manage the connections of each slicer, click on a slicer, go to the Slicer menu, and select Report Connections. A pop-up window appears where all pivot tables of the spreadsheet are listed, categorised by tab. Choose all the tables of the tab where the slicer is positioned, including the ‘Primary’ Pivot Tables. It is important that the latter are chosen. However, do not choose tables that belong to a different tab.
  • To format each slicer, click on one or multiple slicers and go to the Slicer menu, set the size of the slicer (Height, Width), the number of Columns in which its buttons are distributed, Align Left and Distribute Vertically and Group the slicers together. Finally, if required, pick a style and change colours.

Note: Slicers may reduce sample sizes to very low counts. This will have implications for comparing results and making inferences. Ensure when interpreting outputs, minimum sample sizes are considered and relevant statistical tests are conducted.

  1. For term definitions and acronyms, see ‘Annex A: Acronyms and Definitions’. 

  2. You can find an overview of formulas in Excel from Microsoft and how to protect formulae here. 

  3. You can find instructions on how to automatically number rows from Microsoft (do not use ROW function). 

  4. To change the options on the dropdown list, see 2.2.2. The same applies to all fields as they follow data validation rules, apart from columns A and B, where no such rules apply. 

  5. Details on how to determine IFs and the binary or limited responses you will collate are found in the PID. 

  6. You can find instructions on how to apply data validation to cells from Microsoft. 

  7. The validation list of offences is based on the Government Offence Classification Index, with minor modifications based on the Crime Type Definitions list issued by Met Police. 

  8. The validation list of offences is based on the Government Offence Classification Index, with minor modifications based on the Crime Type Definitions list issued by Met Police. 

  9. You can find instructions on how to create a drop-down list here. 

  10. You can find instructions on how to use conditional formatting to highlight information here. 

  11. You can find instructions on how to lock formulas here. To unprotect a tab: File (menu) > Info > Protect Workbook > click Unprotect (next to the relevant tab). 

  12. You can find instructions on how to insert or delete rows and columns here. 

  13. You can find instructions on how to filter data in a range or table here. 

  14. You can find instructions on how to find and remove duplicates here. 

  15. You can find instructions on how to create a PivotTable to analyze worksheet data here. 

  16. The PivotTable Fields pane appears either by clicking on a pivot table or by pressing the Field List button in the PivotTable Analyze menu. 

  17. You can find instructions on how to design the layout and format of a PivotTable here. 

  18. You can find instructions on how to create a chart with recommended charts here. 

  19. You can find instructions on how to create a box plot chart here (recommended), here and here. 

  20. Aguinis, H., Gottfredson, R.K. and Joo, H., 2013. Best-practice recommendations for defining, identifying, and handling outliers. Organizational Research Methods, 16(2), pp.270-301. 

  21. You can find instructions on how to start the Power Pivot add-in for Excel here. 

  22. You can find instructions on how to format elements of a chart here. 

  23. Cell numbers or cell ranges in blue are indicative. They may need to be reviewed when the formulas are copied to a spreadsheet. 

  24. You can find more information on paste options here. 

  25. You can find instructions on how to format numbers as percentages here.