07 Feb
Research Pilots – evaluating and validating new technologies
Research pilots play a critical role in the testing, validation, and evaluation of new technologies before they are pushed into the market. They act as a feasibility study showing technology developers, end-users, and evaluators what is and is not working so that modifications and/or additional features can be incorporated to improve usability.
Research pilots should be much more than a Q&A session: they should allow participants to interact with the technology directly and experience first-hand how it works.
Our experience highlights the benefits of designing research pilots in collaboration with users across multiple phases. This provides an opportunity for developers to incorporate the feedback gathered at an earlier stage and enables stakeholder involvement to evolve.
The role of research pilots in developing technical solutions
New technology solutions face tough odds along the path from concept to a successful market rollout – developers face the chance of failure at every turn. Project management methodologies have been reimagined many times to try to provide ever greater assurance of success. At the root of most efforts to ensure success are the touchpoints with the users who will ultimately be the judge of whether the solution meets their needs – and will make that judgment with their purchasing decisions.
Beyond usability, pilots provide a key opportunity to collaborate with end-users to establish and evaluate benchmarks for success that can demonstrate the impact and benefits of the technical solution, thereby providing evidence to the market of its value. In turn, this provides critical information for a convincing business case for wider sales and marketing efforts at the commercialisation stage.
While undertaking a research pilot requires effort and resources such as budget and stakeholder time, it will ultimately minimise the risk of failure when launching the technical solution. Launching a product that fails wastes not only the resources spent on development, but also those needed to launch and market the solution. Furthermore, technology failure can damage an organisation’s reputation, as recently demonstrated by TSB when launching their new IT system. The failure resulted in TSB losing 12,500 customers, many of whom had money fraudulently taken from their accounts. A review of the failure by IBM found that proper tests may not have been carried out before five million customers were transferred to the new TSB IT system.
What should a research pilot involve?
Here at Trilateral, we have identified key lessons from our experience in assessing different types of technical solution. The nature of a pilot largely depends on the software development approach taken (e.g., agile or waterfall).
For our data analytics work, such as within STRIAD, we favour the use of agile development sprints engaging directly with users of the intended solution. Benefits of these sprints include:
- Enabling co-design with greater opportunities for participatory processes of stakeholder engagement and with relevant in-house expertise (e.g., data science, social science and policing)
- Providing transparency throughout the process from design through to implementation (e.g., data protection considerations, ethics, prioritisation of features)
- Allowing for change and improving the quality of our solutions
- Focusing on business value by enabling our team to understand what matters most to our end-users
To achieve this, we use an interdisciplinary team of experts (e.g., technology developers, end-users, social scientists) across the pilot design and evaluation to ensure that the impact of the technology is understood from different perspectives. Through on-site testing and evaluation, our team can work with end-users to achieve the desired results.
While we use an agile approach within our own technology development activities, our wider research work demands flexibility, as different projects require different approaches to evaluation. Depending on the needs and approach of a given project, our team draws on a range of research methods (e.g., interviews, workshops, participant observation, surveys) to engage with pilot participants when designing and evaluating research pilots.
It is important to host the pilot and evaluate the technology in the environment where it will be used. This approach was adopted by Trilateral when assessing a research pilot led by Lancashire Constabulary to test a Community Policing technology. To fully understand police, local authority, and citizen feedback on the usability of the proposed solution, Trilateral’s team shadowed pilot participants to collect eye-witness accounts of how the technical solution worked in the field and suggestions for improvement. By shadowing different types of end-user, Trilateral staff were able to understand the limitations and value the technology offered to each stakeholder group.
We will continue to build upon our work in designing and evaluating pilots in the EUNOMIA project, focused on providing a solution to address the challenges of disinformation and information verification in social media.
Trilateral will ensure that co-design principles are incorporated throughout the EUNOMIA project to define the problems and envisage the solutions, and will work with partners and end-users in the planning, execution and evaluation of the tools.
For more information contact our team:
Kush Wadhwa, Director at Trilateral Research
Hayley Watson, Practice Manager at Trilateral Research
Su Anson, Senior Research Analyst at Trilateral Research