1 Welcome

1.1 How to use this document

This Handbook is part of the Environs application and of any associated training being delivered. It is integrated into the application but can also be downloaded as a separate document for ease of use.


This Handbook can be read through from start to finish, but it has been designed so that each section can be navigated to and read independently - serving as an aide-mémoire whilst carrying out investigations in the App.

1.2 Getting help

Contact | Email | Supporting
Maria Orenbuch | | Support, licensing and sales
Savas Hadjipavlou | | Statistical analysis, bespoke versions

1.3 What problem are we solving?

Over the past twenty years a mountain of information has been published by government and other agencies and is readily available online. However, there are several obstacles to its effective use:

  • The sheer volume. Environs’ database is generated by preprocessing around 1 TB of data down to a more manageable 50 GB, which serves as the foundation for analyses.

  • The data are very often collected and published in silos. Connections between them are not transparent (different names for similar or the same things), and often can only be derived through indirection (using intermediary information to step between two sets of data, for example). This is explored in more detail later in this document, using some important datasets such as the index of multiple deprivation, including the limitations that may apply to the use and interpretation of the data.

  • A high level of data analytical skills is needed to transform the data into useful information and insights. Such skills are not readily available.

Environs is able to slice through this Gordian knot, pushing the boundary from “how do I do it?” to “what does the data tell me and what does it mean?”. In particular it allows for information to be shared between sectors, supporting the joint consideration needed for the planning and delivery of cross-cutting, coordinated services.

Environs uses a range of sophisticated analytical techniques to integrate, explore and visualise a wide range of public, open-source data covering demographic, social, health, economic and other sectors. Its aim is to generate quantitative insights into the relationships between relevant factors. Users have great flexibility to specify geographic areas and to make comparisons that allow hypotheses to be formulated, validated (or not) and conclusions drawn. A forecasting component allows potential future change to be explored, contributing to local, regional and national planning for resource allocation and service provision.

1.4 What data are in Environs?

Environs includes the following major categories1 of data for England and Wales2:

  • Administrative geographies and associated boundaries.

  • Population data by age and sex, at output area (OA) and lower super output area (LSOA)3 - section [1]

  • Indices of deprivation4 incorporating a range of factors and providing scores and rankings by LSOA - section [2]

  • General Practitioner (GP) level data, levels of patient registration by home area (LSOA), age and gender, as well as prescription levels for a range of indicator conditions5. GP locations relative to the populations they serve are also included - section [3]

  • Crime and anti-social behaviour data, as reported by the police forces in England and Wales, covering a range of categories at individual location or LSOA level - section [4]

  • Prosecutions and convictions (police force area level) - section [5]

  • [Sentencing] - section [6]

  • House sales capturing the volume of transactions and prices paid by type of property - section [7]

  • Fire incidents - section [8]

  • Tax Credits - section [9]

  • Power utilisation - section [10]

  • Social and geographic character of Output areas - section [11]

  • Census 2021 categories - economic activity, travel to work, employment - section [12]

  • Schools/ education - section [13]

  • Population (2021) is available by single year of age and by gender, by small area (lower super output area).
  • Area classification descriptors, based on Census 2011 ‘sub-group’ categories, provide a ‘pen picture’ at small area level.
  • Indices of deprivation (2019) cover ‘combined’ as well as individual-level indices describing a national ranking of each small area.
  • Family tax credits, by number of families or dependent children, provide information, for example, about the number of children in low-income households.
  • Crime levels are drawn from the data.police.uk monthly database, focusing on anti-social behaviour, drugs, burglary, robbery, violence or sexual offences and criminal damage, providing indicators for community safety by area.
  • Fire and rescue service incidents, by number over 2018-2021, provide indicators for fire and rescue service demands across the selected geography.

1.5 Examples

Question and answer examples:

1.6 More detailed background

Explain data and meaning by section

1.7 Deprivation indices

England and Wales have similar but not identical processes for assessing the levels of deprivation in their respective jurisdictions.6

The Indices of Deprivation (IoD) are measures that describe relative deprivation at LSOA geographic resolution. They have been produced or updated periodically since 2000. The latest versions for England and Wales are for 2019.

While the processes and objectives are similar in the two jurisdictions they are not identical. The indices in England cover seven different domains:

• Income
• Employment
• Education, Skills and Training
• Health Deprivation and Disability
• Crime
• Barriers to Housing and Services
• Living Environment

In Wales the domains are described as:

  • Income
  • Employment
  • Education
  • Health
  • Housing
  • Community safety
  • Physical environment

Both jurisdictions publish additional domains. However, for Environs we have selected the above as being of most general use and broadly congruent, so that they provide a consistent set of variables for use in analysis. We have not compared the methodologies used in the two jurisdictions to generate the indices (more about this below), so we cannot confirm, at this stage, whether comparisons between areas in England and in Wales are reliable. In particular it should be noted that the rank scales are different: 1 (most deprived) to 32,844 (least deprived) in England; and 1 (most deprived) to 1,909 (least deprived) in Wales. In Environs, filters for the ranking are set to 1%, that is, the binning resolution or interval at which LSOA areas are placed in any analysis. We might hope that comparisons across the two jurisdictions, in so far as they involve deprivation rankings, may be considered suggestive. However, there is not, to our knowledge, any certainty as to how, say, the most deprived 10% of Welsh rankings would objectively map onto the English set of rankings, so that the lowest 10% in each jurisdiction would be broadly equivalent.
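
To illustrate how rank-to-percentile binning puts the two jurisdictions on a comparable (if only suggestive) footing, the sketch below converts each LSOA’s national IMD rank into a 1% percentile bin. This is illustrative R only, not Environs’ internal code, and the column names are assumed:

```r
# Illustrative sketch (assumed column names; not Environs' internal code).
# Converts national IMD ranks into 1% bins so that, for example, "the most
# deprived 10%" can be selected consistently within each jurisdiction.
library(dplyr)

imd <- tibble::tribble(
  ~lsoa,       ~country,  ~imd_rank,
  "E01000001", "England",      2500,
  "W01000001", "Wales",         150
)

n_lsoa <- c(England = 32844, Wales = 1909)   # total ranked LSOAs, 2019 indices

imd <- imd %>%
  mutate(
    percentile = 100 * imd_rank / n_lsoa[country],  # low percentile = more deprived
    pct_bin    = ceiling(percentile)                # 1% bins, as used by the ranking filters
  )

most_deprived_10 <- filter(imd, pct_bin <= 10)      # most deprived 10% within each jurisdiction
```

Even so, as noted above, the most deprived 10% in Wales is not guaranteed to be objectively equivalent to the most deprived 10% in England.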

In Environs the following common nomenclature is used:

  • IMD 2019 rank - the combined rank index as formulated within England or Wales as appropriate,
  • Income rank
  • Employment rank
  • Health rank
  • Education rank
  • Housing rank
  • Community Safety or crime rank
  • Physical Environment rank

…..

1.8 Why Data Analysis and Forecasting Matter

UK policing faces multiple strategic challenges: rising demand, increasing complexity and greater scrutiny of its performance. Chief constables and their teams need a richer understanding of where and how their officers, staff and resources are used, how productive they are, and what outcomes they achieve. They need to:

  • understand the size and shape of workload, now and in the future.

  • build capacity, capability and resilience across their workforce.

  • deploy resources as effectively and efficiently as possible.

  • evidence the outcomes they achieve - to local and national partners, to inspectorates, and to their communities.

Using bespoke software and expert analysis, Poliscope modelling gives Forces:

Benefits of Poliscope | Leading to…
Deeper operational knowledge to inform deployments, priorities, use of technology and workforce planning. | More assurance when taking strategic decisions.
Compelling evidence for performance frameworks and other statutory requirements, e.g. PEEL inspections. | Evidence-based forecasting for Force Management Statements.
More robust information for financial planning, investment and business case development. | Greater chances of successful funding bids.
Actionable insights to engage and inform stakeholders, partners and communities. | Higher levels of trust, confidence and influence.

2 Good to Know & Key Concepts

2.1 An Overview of Forecasting

2.1.1 What is Forecasting?

Forecasting is about using observations regularly collected about the past to build a picture of history that can then be used to make predictions about the future.

Forecasting is an invaluable tool widely used across a vast number of industries around the world: from weather and meteorological services, to the manufacturing sector, which must consider consumer trends, seasonality and the availability of raw materials.

2.1.2 Forecasting Timeframes

The time frames used in forecasting depend on the types of decisions that need to be made and can vary from a few hours and days, to weeks, months and years.

For example:

Typically, the Poliscope Model can calculate forecast results up to four years ahead, with monthly data views (intervals), in order to support a Police Force’s medium term strategic planning and decision making. The results can also be aggregated up to produce quarterly, semi-annual and annual data views; and produce a time series with few if any gaps, even for offence types that are low in volume.

2.1.3 What Is Needed For Good Forecasting

Forecasting can be straightforward or very difficult, and differs widely from system to system depending on the level of predictability (see Figure below). However, good quality forecasting needs:

  • the availability of good quality data in sufficient volume, in order to describe the system, processes or environment being forecast.

  • a good understanding of the system, processes or environment being forecast, including the relationships between the various factors and variables that make it up.

  • identification of any feedback loops between actions based on forecasts and future measurements.

  • identification of the key patterns and relationships in the historical data that have an enduring effect going forward. Conversely, one-off events whose effect will diminish quickly, with no long-lasting impact, should be excluded from forecasts.

Forecasting is as much judgement as science. The Poliscope Model puts the user in control with:

  • the science guiding the user’s judgement by developing a model based on the historical data.
  • the results guiding the user’s thinking about how the future will most likely unfold and the practical consequences of that.

2.1.4 Forecasting Methods Used In The Poliscope Model

The Poliscope Model makes use of a number of forecasting methods in combination, each with its own strengths and weaknesses. These are set out in the table below; an illustrative sketch of how forecasts can be combined follows the table.

Forecasting Method | Description / Summary
ARIMA | The ARIMA methods incorporate autocorrelations in the data (a measure of the relationship between observations as a function of how far apart they are from each other). Non-seasonal time series that exhibit patterns and are not random white noise can be modelled with ARIMA models. Used in the ‘Combination’ approach - see below.
ETS | Forecasts use exponential smoothing methods, where weighted averages of past observations are used, with the weights decaying exponentially as the observations get older. More recent observations make a larger contribution than older ones. Used in the ‘Combination’ approach - see below.
STLM | This method handles trend and seasonal behaviour. It looks for trends over time (for example, are the values going up or down in a consistent fashion?) and for cyclical behaviour where a pattern repeats. Loess is a method for estimating non-linear relationships. Used in the ‘Combination’ approach - see below.
TBATS | This method is similar to STL but is capable of modelling time series with multiple seasonalities, making it more robust against short sequences of observations. Used in the ‘Combination’ approach - see below.
NNETAR | The ‘nnetar’ forecast method fits a feed-forward neural network with a single hidden layer and lagged inputs for forecasting univariate time series. It is a nonlinear autoregressive model, and it is not possible to derive prediction intervals analytically; these are generated through simulation, so the method can be time consuming.
SNAIVE | This method takes the most recent observation as representative of future values. This effectively assumes that the system has no memory of what has happened in the past and the latest value is the best guide to what is likely to happen in the future.
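
As a rough illustration of the ‘Combination’ idea (a sketch only, using the open-source R forecast package and an example data set, not the Model’s own code), several of the methods above can be fitted to the same series and their point forecasts averaged with equal weights:

```r
# Illustrative sketch of combining forecasts (not the Model's internal code).
library(forecast)

y <- AirPassengers          # example monthly series shipped with R
h <- 24                     # forecast 24 months ahead

fc_arima <- forecast(auto.arima(y), h = h)
fc_ets   <- forecast(ets(y), h = h)
fc_stlm  <- forecast(stlm(y), h = h)
fc_tbats <- forecast(tbats(y), h = h)

# Equal-weight combination of the point forecasts
combined <- (fc_arima$mean + fc_ets$mean + fc_stlm$mean + fc_tbats$mean) / 4

plot(y, xlim = c(1949, 1963), ylab = "Passengers")
lines(combined, col = "blue", lwd = 2)   # combined point forecast
```

Equal weighting is shown purely for illustration; the weighting actually used in the Model’s ‘Combination’ approach may differ.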

In certain parts of the Model (see Section 3), the user can change the starting point of the forecast from the most recent available value. This feature allows the user to make a comparison of the forecasted results against recent actual observations. See Section 2.3.3 for detailed instructions.

2.2 Understanding the Graphs

Each graph includes labelled axes and a ‘legend’ that explains what each plot line represents.

2.3 Key Functions And How to Use These

The following key functions are available in various parts of the Poliscope Model.

2.3.1 Downloading

Across the Model, the user is able to download data for examination outside the Poliscope Model (as required). To do this:

Step 1. Click the “Download” button to save a copy of the data to the user’s computer. The data is available in .xlsx format.
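
For users who want to continue working with a downloaded extract in R, a minimal sketch is shown below (the file name is hypothetical):

```r
# Minimal sketch: read a downloaded .xlsx extract into R (file name is hypothetical).
library(readxl)

extract <- read_excel("poliscope_download.xlsx", sheet = 1)
head(extract)
```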

2.3.2 Selecting the forecasting methodology

In certain parts of the Model, the user can choose the forecasting methodology used (see Section 2.1.4 above for descriptions of the available methodologies). To do this:

Step 1: Select the chosen forecasting methodology.

Note - Selecting more than one forecasting methodology will provide a hybrid model.

2.3.3 Adjusting the forecasting time frame used by the simulation

In certain parts of the Model, the user can change the time frame used for forecasting. This includes:

  • Changing the starting point of the forecast from the most recent available date/data values to an earlier date. This allows a comparison between actual historic data and the forecast.
  • Selecting how far into the future (from the chosen starting point) the forecast should project.
  • Selecting how many actual historic data points are used by the model to forecast.

To do this:

Step 1. Adjust the left “Forecast thresholds” slider to set how many months earlier (than the latest date/data values) the forecast starting point should be.

Step 2. Adjust the right “Forecast thresholds” slider to choose how many months into the future (from the chosen starting point) the forecast should project.

Step 3: Choose the number of actual historic data points (prior to the chosen start date set in Step 1) that are to be used by the model to generate the forecast.

For example, based on the forecast settings shown in the figure above, the user has selected the following (an illustrative R sketch follows the list):

  • A start date 24 months prior to the latest available data / date;
  • To forecast 36 months into the future from the chosen start date;
  • To use the last 6 available data points prior to the chosen start date.
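
As an illustration of the same idea in code (a sketch only, using the open-source R forecast package and a dummy series, not the Model’s internal implementation; the values below are arbitrary), the forecast origin is moved back from the latest observation, the model is fitted on a limited window of history before that origin, and the projection can then be compared with the actuals that followed:

```r
# Illustrative sketch only (dummy data; not the Model's internal implementation).
library(forecast)

set.seed(1)
monthly <- ts(rpois(120, lambda = 50), start = c(2013, 1), frequency = 12)  # dummy monthly counts

offset_back  <- 24   # forecast starting point: 24 months before the latest observation
horizon      <- 36   # project 36 months ahead of that starting point
history_used <- 48   # number of historic points used to fit (set with the left slider)

origin   <- length(monthly) - offset_back
training <- subset(monthly, start = origin - history_used + 1, end = origin)
actuals  <- subset(monthly, start = origin + 1)

fc <- forecast(ets(training), h = horizon)
plot(fc)
lines(actuals, col = "red")   # overlay recent actuals for comparison
```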

2.3.4 Simplifying the time frame data used by the simulation

The user has the option to simplify the historical data by applying an average. To do this:

Step 1: Select ‘Is active?’ to confirm the time series values are to be simplified.

Step 2: Using the slider, select the start and end dates for the time period to be simplified.

Step 3: Choose the simplifying approach from one of the following (a short illustrative sketch follows the list):

  • The average of the data to the left of the date selection (i.e. all earlier dates available).
  • The average of the data to the right of the date selection (i.e. all the more recent dates).
  • A specific user defined value.
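
A minimal sketch of the three options (illustrative R with assumed names; not the Model’s own code):

```r
# Illustrative sketch of the three simplification options (assumed names; not the Model's code).
y   <- ts(rnorm(60, mean = 100), start = c(2019, 1), frequency = 12)  # dummy monthly series
idx <- 25:36                                                          # window chosen with the slider

y_left  <- y; y_left[idx]  <- mean(y[1:(min(idx) - 1)])          # average of all earlier values
y_right <- y; y_right[idx] <- mean(y[(max(idx) + 1):length(y)])  # average of all later values
y_fixed <- y; y_fixed[idx] <- 85                                 # a specific user-defined value
```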

2.3.5 Adjusting the data time frame

The user is able to adjust the data time frame on the on-screen graphs. To do this:

Step 1. Click the ‘time frame’ box.

Step 2. Enter the time frame value required. Increasing the data time frame value will aggregate the available data up to the desired time frame, and the graph will appear smoother as a result (an illustrative sketch follows the list below).

  • A value of [1] means monthly data will be displayed;
  • A value of [3] means quarterly data will be (aggregated up from monthly and) displayed. Other values can also be used.
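
For example, a time frame value of 3 corresponds to the kind of aggregation sketched below (illustrative R only):

```r
# Illustrative sketch: aggregate monthly counts up to quarterly totals (time frame value of 3).
set.seed(1)
monthly   <- ts(rpois(36, lambda = 40), start = c(2021, 1), frequency = 12)  # dummy monthly counts
quarterly <- aggregate(monthly, nfrequency = 4, FUN = sum)
quarterly
```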

2.3.6 Inspecting selected areas of the graphs

Across the Model, the user is able to inspect selected areas of the on-screen graphs. To do this:

Step 1. Click and drag the mouse across the area to be zoomed in and inspected.

Step 2. Double click the mouse on the graph to reset.


2.4 Future Improvements/ Developments

The Model is kept under review and will be upgraded periodically to incorporate user feedback and as new functionality is developed. Note that certain upgrades will be dependent on a maintenance and support contract being in place.

3 Using the Poliscope Model

3.1 Getting Started

Logging In

The Model is hosted as a web-based ‘Shiny Application’.

Access to the Model is restricted to those with appropriate login credentials. To apply for login contact

Step 1. Enter your username into the “Email” box and press ‘Continue’.

Step 2. Enter your password into the “Password” box. Ensure that this includes a combination of lower case letters, upper case letters, numbers and symbols.

Step 3. Select “Log in” to log into the Model.

Logging Out

Step 1. Click the “Logout” button.

Note – Closing the browser with the ‘Close App’ button does not automatically log out of the Model.

3.2 Overview of the Model

The Model is organised through a hierarchy of tabs covering different aspects of data exploration and analysis. Some tabs, when selected, expand out to a range of other sub-tabs that allow further analysis.



3.3 FMS categories

Purpose:

In this tab, the user can explore the analysis of call origin and volumes.

Note - FMS categories are non-composable by design i.e. they cannot be combined meaningfully. When making a selection, choose one type only.

How to Use:

Step 1. Select one “FMS category” from the selection menu.

Step 2. Select the “Forecasting thresholds”. See Section 2.3.3 for detailed instructions.

Step 3. Select the “Forecasting Model Components”. See Section 2.3.2 for detailed instructions.

Step 4. Where desired, simplify the time frame data using the “Manipulate time series values” options. See Section 2.3.4 for detailed instructions.

Step 5. Select ‘Run Forecast’ to generate the analysis.

The data time frame can be dynamically adjusted on-screen by selecting the appropriate monthly interval. See Sections 2.3.5 and 2.3.6 for full instructions.

The data can be downloaded to the user’s computer for examination outside the Poliscope Model, as required. See Section 2.3.1 for full instructions.

Output Produced:

The FMS categories tab produces a “case volume analysis” graph which shows historic and forecast (projection) case volumes over the selected time frame.

3.4 Missing Persons

Purpose:

In this tab, the user can explore missing person incident volumes based on ‘Compact’ data.

How to Use:

Step 1. Select the upper and lower ‘Age Range’ boundaries using the slider menu.

Step 2. Select the required ‘Risk Level’ using the option menu.

Step 3. Select the required ‘Attended’ using the option menu. Selecting both ‘Yes’ and ‘No’ will include all cases.

Step 4. Select the “Forecasting thresholds”. See Section 2.3.3 for detailed instructions.

Step 5. Select the “Forecasting Model Components”. See Section 2.3.2 for detailed instructions.

Step 6. Where desired, simplify the time frame data using the “Manipulate time series values” options. See Section 2.3.4 for detailed instructions.

Step 7. Select ‘Run Forecast’ to generate the analysis.

The data time frame can be dynamically adjusted on-screen by selecting the appropriate monthly interval. See Sections 2.3.5 and 2.3.6 for full instructions.

The data can be downloaded to the user’s computer for examination outside the Poliscope Model, as required. See Section 2.3.1 for full instructions.

Output Produced:

The Missing Persons tab produces a “case volume analysis” graph which shows historic and forecast (projection) case volumes over the selected time frame.

3.5 Crime Analysis

Purpose:

In this tab, the user can explore the volume (with forecast) of crimes, team allocations and outcomes.

How to Use:

Step 1. Select the Crime type from the option menu.

Step 2. Select any required “Flags” from the radio/option button menu.

Step 3. Select the “Investigation Outcome” from the option menu.

Step 4. If choosing to investigate volumes by teams:

  • Turn on the ‘Teams Filter On?’ button.
  • Select the team structure required using the ‘Team Selector’ radio option menu.
  • Choose the Team of interest from the option menu.

Step 5. Select the “Forecasting thresholds”. See Section 2.3.3 for detailed instructions.

Step 6. Select the “Forecasting Model Components”. See Section 2.3.2 for detailed instructions.

Step 7. Where desired, simplify the time frame data using the “Manipulate time series values” options. See Section 2.3.4 for detailed instructions.

Step 8. Select the ‘Plot as % w.r.t Team Selection’ option button if the output is to be displayed in percentage terms.

Step 9. Select the ‘HO reportable?’ option button to filter for Home Office reportable output.

Step 10. Select ‘Run Forecast’ to generate the analysis.

The data time frame can be dynamically adjusted on screen by selecting the appropriate monthly interval. See Sections 2.3.5 and 2.3.6 for full instructions.

The data can be downloaded to the user’s computer for examination outside the Poliscope Model, as required. See Section 2.3.1 for full instructions.

Output Produced:

The Crimes tab produces a “case volume analysis” graph which shows historic and forecast (projection) case volumes over the selected time frame.

3.6 FCC Incidents

Purpose:

In this tab, the user can explore the volume (with forecast) of control centre incident data with deployment allocations.

How to Use:

Step 1. Select the “FCC Incident” from the option menu.

Step 2. If required, select “Court Unique cases”.

Step 3. Select the ‘Attendance’ required.

Step 4. Select the ‘Incident priority’ required.

Step 5. Select the ‘Key word qualifiers’ required.

Step 6. Select any required “FCC Flags” from the radio/option button menu.

Step 7. If choosing to investigate volumes by teams:

  • Turn on the ‘Teams analysis?’ button.
  • Select the team structure required using the ‘FCC Team Selector’ radio option menu.
  • Choose the team of interest from the option menu.

Step 8. Select the “Forecasting thresholds”. See Section 2.3.3 for detailed instructions.

Step 9. Select the “Forecasting Model Components”. See Section 2.3.2 for detailed instructions.

Step 10. Where desired, simplify the time frame data using the “Manipulate time series values” options. See Section 2.3.4 for detailed instructions.

Step 11. Select ‘Run Forecast’ to generate the analysis.

The data time frame can be dynamically adjusted on screen by selecting the appropriate monthly interval. See Sections 2.3.5 and 2.3.6 for full instructions.

The data can be downloaded to the user’s computer for examination outside the Poliscope Model, as required. See Section 2.3.1 for full instructions.

Output Produced:

The FCC Incidents tab produces a “case volume analysis” graph which shows historic and forecast (projection) case volumes over the selected time frame.

3.7 Deployment Analysis

Purpose:

In this tab, the user can explore the underlying model used in workload simulations, including deployment and associated resources, by incident type.

How to Use:

Step 1. Select the “FCC Incident” type from the option menu.

Step 2. Select the “Crime” type from the option menu.

Step 3. Select any ‘Incident priority’ required.

Step 4. Select any ‘Key word qualifiers’ required.

Step 5. Select any required “FCC Flags” from the radio/option button menu.

Step 6. If choosing to investigate deployment by teams:

  • Turn on the ‘Teams analysis?’ button.
  • Select the team structure required using the ‘Team Selector’ radio option menu.
  • Choose the team of interest from the option menu.

Step 7. Select the “Forecasting thresholds”. See Section 2.3.3 for detailed instructions.

Step 8. Select the “Forecasting Model Components”. See Section 2.3.2 for detailed instructions.

Step 9. Where desired, simplify the time frame data using the “Manipulate time series values” options. See Section 2.3.4 for detailed instructions.

Step 10. Select ‘Run Forecast’ to generate the analysis.

The data time frame can be dynamically adjusted on screen by selecting the appropriate monthly interval. See Sections 2.3.5 and 2.3.6 for full instructions.

The data can be downloaded to the user’s computer for examination outside the Poliscope Model, as required. See Section 2.3.1 for full instructions.

Output Produced:

The Deployment tab produces four graphs:

  • A “Resource time” graph which shows (and compares) ‘time on scene’ and ‘total resource time’ for two years (currently set to 2019 and 2021). The analysis shows the average time spent on scene for each of the two years.

  • A “Number of staff deployed” graph which shows (and compares) how many staff members are attending for 2019 and 2021. The analysis also shows the average number of staff attending for the two years.

  • A “Resource time per selected incident” graph which shows the median time spent per incident over time.

  • A “% of selected incidents deployed” graph which shows the monthly average proportion of incidents for which staff were deployed over time.

3.8 Workload Dashboard

Purpose:

In this tab the user can both:

  • examine the underlying data including case volumes, time spent and outcomes over time and by team;
  • develop theoretical/forecasting scenarios to explore the impact of adjusting case volumes, resources, case outcomes and workload.

How to Use:

The main steps for using this tab are set out below.

Setting the baseline analysis:

A baseline analysis of the underlying data must first be carried out to form a reference for any comparative scenarios that are subsequently run.

Step 1. Select the ‘FCC incident’ and/or ‘Crime’ from the option menus as required.

Step 2. Select any ‘Incident priority’ required.

Step 3. Select the ‘Key word qualifiers’, ‘FCC flags’ and / or ‘Crime flags’ from the option menus as required.

Step 4. Select the ‘Compact Missing Persons’ flag if the output is required to show the results for missing persons separately.

Step 5. If choosing to investigate the data by teams:

  • Select the team structure required using the ‘Deploy Team selector’ and ‘Crime Team selector’ radio option menus.
  • Choose the team(s) of interest from the option menu.

If a forecast is required in the baseline:

Step 6. Select the “Forecasting thresholds”. See Section 2.3.3 for detailed instructions.

Step 7. Select the “Forecasting Model Components”. See Section 2.3.2 for detailed instructions.

Step 8. Where desired, simplify the time frame data using the “Manipulate time series values” options. See Section 2.3.4 for detailed instructions.

Step 9. Press the ‘Run Analysis’ button.

The ‘baseline’ analysis will be carried out and shown on screen.

To run a scenario:

With the baseline analysis set, a comparative scenario can now be run.

Step 10. Select the ‘Scenario: explore impact of changes?’ tick box.

Step 11. Choose the date for the model to commence a scenario using the date slider.

Now, set the parameters to be applied to the data after the chosen start date:

Forecasting parameters:

Step 12. Select the “Forecasting thresholds”. See Section 2.3.3 for detailed instructions.

Step 13. Select the “Forecasting Model Components”. See Section 2.3.2 for detailed instructions.

Step 14. Where desired, simplify the time frame data using the “Manipulate time series values” options. See Section 2.3.4 for detailed instructions.

Volume and deployment parameters:

Step 15: Select ‘Volume and deployment parameters’ and adjust these as required.

Outcome parameters:

Step 16: Select ‘Outcome adjustments’ and adjust these as required.

Investigation timing parameters:

Step 17: Select ‘Investigation timings’ and adjust these as required.

Other timing parameters:

Step 18: Select ‘Other timings’ and adjust these as required.

With the parameters set, the scenario can be run and compared to the baseline analysis.

Step 19: Press the ‘Run Analysis’ button.

Note: If the parameters need to be reset, press the “Reset adjustments” button.

The scenario will be carried out and shown on screen.

The data time frame can be dynamically adjusted on screen by selecting the appropriate monthly interval. See Sections 2.3.5 and 2.3.6 for full instructions.

The data can be downloaded to the user’s computer for examination outside the Poliscope Model, as required. See Section 2.3.1 for full instructions.

Output Produced:

The Workload Dashboard tab produces a number of graphs which can be accessed through the drop-down menus shown below:

‘Workload results’ outputs include:

A basecase workload hours graph which shows the workload over the selected time frame for the base analysis.


A scenario workload hours graph which shows the workload over the selected time frame for the chosen scenario.

Note - Selecting the ‘Compare with base?’ tick box will show both the base analysis and scenario output together on the graph.


A workload hours detail core graph which shows a breakdown of the workload by outcome over the selected time frame.



A workload hours detailed other graph which shows a breakdown of the workload by key activities over the selected time frame.


‘Base case volume analysis’ outputs include:

A case volumes graph which shows historic and forecast (projection) case volume over the selected time frame.


A deployment times graph which shows historic deployment times (average minutes per person per case) over the selected time frame.


An outcome ratios graph which shows historic outcome ratios over the selected time frame.


3.9 Control Panel

In the Control Panel, users can adjust global settings.

Users can also update the internal Poliscope database by importing STORM, NICHE and COMPACT data.

3.10 Handbook and Help

Users can access this Handbook, Terms & Conditions, data information and licensing information.

3.11 Worked Examples

A number of examples have been developed to illustrate how to use Poliscope. Click on the link below to download or open in the browser.

Download

Open in browser

3.12 Sense Checking Results

  • All models are simplifications of the real world. Modelled results will not match reality exactly but they should be good enough to help guide decisions.

  • The user should apply common sense when examining the modelled results asking appropriate questions such as:

    • Do the results make sense?
    • Are the results of the expected magnitude and direction (increasing / decreasing)?
    • Are the results surprising or in line with expectations?
    • Can the results be explained?
  • The user should experiment with the scenario by adjusting its variables to understand the impact on the results; are they bigger / smaller / increasing / decreasing / changing rapidly, slowly or not at all?

  • The user should also experiment with other scenarios to sense check the relationships and assess the impact on the results.

  • The user should experiment with extreme values to better understand the effects.

  • The user should use the model to build up intuition and understanding about scale and sensitivity.

  • Continued use and updating of the model data and functionality will improve its utility.

4 Maintaining the Poliscope Model

4.1 Maintaining up-to-date data:

This Model relies on the following main data sources:

  • Data from the STORM system covering the period 1 January 2017 onwards capturing incidents handled through the Force’s control centre.

  • Recorded crime data from NICHE covering the period 1 January 2017 onwards.

  • Other data on missing persons, from the COMPACT system.

The Model typically loads data files ‘on demand’. This means key data files are loaded at the beginning. Others are loaded when a particular tab is selected. Data files are updated periodically.
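
By way of illustration only (tab and file names below are hypothetical, and this is not the Model’s actual code), the ‘on demand’ loading pattern in a Shiny application typically looks something like this:

```r
# Illustrative sketch of 'on demand' data loading in a Shiny app (hypothetical names; not the Model's code).
library(shiny)

ui <- fluidPage(
  tabsetPanel(id = "tabs",
    tabPanel("Workload Dashboard"),
    tabPanel("Crime Analysis")
  )
)

server <- function(input, output, session) {
  core_data  <- readRDS("core_data.rds")   # key data loaded at start-up
  crime_data <- reactiveVal(NULL)          # larger data set, not loaded yet

  observeEvent(input$tabs, {
    # Load the tab-specific data only the first time its tab is opened
    if (input$tabs == "Crime Analysis" && is.null(crime_data())) {
      crime_data(readRDS("crime_data.rds"))
    }
  })
}

shinyApp(ui, server)
```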

The interface for updating the Model’s internal data sets is available in the Control Panel.

4.2 System Maintenance

  • The Model is built using the R statistical language and the ‘Shiny App’ framework.

  • While it provides many manipulations and analyses that meet most needs, it does not (and could not, realistically) handle every question or scenario.

  • RStudio provides an Integrated Development Environment (IDE) that can be used to interrogate and manipulate the base data.

  • We recommend that Northamptonshire Police have the RStudio/R platform available, both to maintain the Model and to have the capability to carry out ‘free-form’ analysis of the incidents/crime and other relevant databases.



— END OF DOCUMENT —


  1. A full list is available here [xx]↩︎

  2. Northern Ireland and Scotland will be incorporated at a later date.↩︎

  3. The meaning of these terms and levels of geography covered in Environs are explained in detail in section [xx]. See also: Give ONS reference↩︎

  4. Indices of deprivation are covered in more detail in section [xx]. See also technical references at:↩︎

  5. A selection of drugs is used, drawing on the British National Formulary categorisation and specific drug groups associated with the treatment of selected conditions, e.g. attention deficit hyperactivity disorder (ADHD), psychoses, obesity.↩︎

  6. Northern Ireland and Scotland also have similar processes.↩︎