RITICS Fest 2025

The Research Institute in Trustworthy Inter-Connected Cyber-Physical Systems (RITICS) is thrilled to announce the launch of an annual workshop series. The event offers a unique platform for showcasing and discussing the latest advances in the security of Industrial Control and Cyber-Physical Systems across the UK.

Presentation Summaries

Operational technology (OT) bridges the physical and cyber worlds in critical sectors. It is natural, therefore, that asset owners seek assurance of their OT security. A common approach to IT security assurance is penetration testing, which aims to emulate the tactics, techniques, and procedures (TTPs) of real adversaries. However, like many OT security capabilities, penetration testing doesn’t translate directly from IT.

Drawing on a study carried out during a RITICS fellowship with practitioners and procurers of such services, we’ll describe current approaches to OT penetration testing. We’ll then briefly enumerate the challenges of penetration testing an OT environment and common flaws in current approaches, particularly when compared with contemporary OT attacks.
A major limitation in OT penetration testing is the failure to emulate real OT attacks, particularly the omission of a crucial tactic: process comprehension. While many OT penetration tests stop upon reaching the OT, declaring ‘game over’, we’ll demonstrate how the game has just begun. We’ll show how common perceptions of complex OT attacks are mostly science fiction, and how process comprehension is a requirement for such attacks to become reality.
We’ll describe how process comprehension can be safely integrated into existing OT penetration testing practices. We’ll explain when to apply it during an engagement while also addressing common concerns asset owners may have about its application. Finally, we’ll highlight the value process comprehension adds by showing that it not only contributes to a greater understanding of vulnerability in the process, but also clarifies the reality of potential threat scenarios and their impacts.
Industrial control systems are inherently deterministic – this stems from their association with mechanical plant, which changes little throughout the asset lifecycle, and from the demands of real-time control and operation.
There is, however, a relaxation of deterministic behaviour as the PERA stack is traversed vertically from levels zero to two (i.e. between the interface that separates the cyber domain from the highly deterministic mechanical realm at one extreme, and the stochastic IT domain at the other).
Understanding the relationship between determinism and the needs of real-time control, and the internal and external factors that drive determinism (e.g. communication protocols, asset life cycles, the external environment within which cyber-physical systems operate), is essential in the field of intrusion detection systems (IDSs), since this dictates which approaches to anomaly detection are best suited to which parts of the cyber domain, and the appropriate set of metrics to measure them.
The session first sets out a new abstracted framework that describes determinism in these terms, and then applies it to suggest where and when it is appropriate to apply techniques such as machine learning and AI in IDSs, and where these approaches are of no benefit or even counterproductive (or, in the case of intrusion prevention systems (IPSs), potentially dangerous).
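
To make this concrete, here is a minimal sketch (our own illustration, with invented timings, not part of the framework itself) of why simple, explainable checks can be preferable to ML at the deterministic lower PERA levels: a fixed-cadence test on inter-arrival times flags deviations from a PLC polling cycle with no learning machinery at all.

```python
import statistics

# Sketch: where traffic is highly deterministic (e.g. a PLC polling cycle at
# PERA levels 0-2), a fixed tolerance around the expected inter-arrival time
# is enough for anomaly detection, and is fully explainable.

baseline = [0.100, 0.101, 0.100, 0.099, 0.100]   # seconds between polls, learned offline
expected = statistics.mean(baseline)
tolerance = 6 * statistics.stdev(baseline)        # tight band: the traffic is deterministic

def is_anomalous(inter_arrival: float) -> bool:
    """Flag any deviation from the fixed polling cadence."""
    return abs(inter_arrival - expected) > tolerance

print(is_anomalous(0.100))   # False: normal cycle
print(is_anomalous(0.250))   # True: injected or delayed traffic
```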

Cyber-resilience and the UK’s Security Strategy. The team at Innovate UK will present the latest funding opportunities and support available to researchers, startups and spinouts, enabled by the 2025 Industrial Strategy and National Security Strategy. We will cover current programmes such as DSBD, CyberLocal and CyberASAP, as well as the new programmes that will be delivered by Innovate UK and UKRI. This may include early access to future competitions and forthcoming programmes from ourselves and DSIT.

Content is flexible as we are developing our plans across Government at the moment.

The Cyber Innovation Hub (Cardiff University) is developing cutting-edge operational technology (OT) cyber security test beds and training programmes to build practical skills and national resilience. Our test bed environments, such as Fieldsite-in-a-Box, Purdue Wall, and Sensor-in-a-Box, enable safe, realistic simulation of cyber-physical attacks on infrastructure including energy, water, and transport systems. These environments support red/blue team exercises, virtual and physical escape rooms, and sector-specific digital twins. Training is delivered through CPD-certified, modular courses across all levels, covering topics from OT fundamentals to advanced incident response and leadership. All content aligns with industry standards such as IEC 62443, ISO 27001, and NIST. This talk will present the test bed capabilities, course design approach, and outcomes from industry collaboration, highlighting a scalable model for cyber-physical security education and research.

The idea behind this talk is to provide some insight into the kinds of issues we encounter when we apply static analysis tools to the software component of safety-critical systems that have already been certified to a high level of safety integrity. For example, for a SIL 2 system in continuous operation, the probability of a dangerous failure per hour is expected to be between 10⁻⁷ and 10⁻⁶. One hundred years is roughly 876,600 hours, so at the upper bound of that range this corresponds to roughly a 60% chance of a dangerous failure every 100 years. Bugs that only occur once in a hundred years are unlikely to be found during testing, so more formal methods of analysis are required.
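
As a back-of-the-envelope check of that arithmetic, assuming a constant failure rate (the standard exponential failure model, P = 1 − e^(−λt)):

```python
import math

# Convert an IEC 61508-style dangerous failure rate (PFH, failures per hour)
# into the probability of at least one dangerous failure over a century,
# assuming a constant failure rate (exponential model).

HOURS_PER_CENTURY = 24 * 365.25 * 100   # 876,600 hours

for pfh in (1e-7, 1e-6):                # SIL 2 continuous-mode bounds
    p_fail = 1 - math.exp(-pfh * HOURS_PER_CENTURY)
    print(f"PFH = {pfh:.0e}: P(dangerous failure within 100 years) = {p_fail:.0%}")
    # prints roughly 8% at the lower bound and 58% at the upper bound
```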

Static analysis is a method of analysing the behaviour of a program from its source code, without actually running the program. Modern static analysers use a technique called abstract interpretation to compute the effect of each statement on the range of possible values that program variables can hold. The use of static analysis tools is now mandatory in some industrial sectors and we would expect safety-critical software to have been subjected to static analysis during its development. However, it is well known that different static analysers can produce different results, in part because of different trade-offs between soundness and precision.
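
To illustrate the idea, here is a toy sketch of the classic interval domain (our own illustration of the principle, not how any commercial analyser works internally): instead of running the program, we track the range [lo, hi] each variable can take and propagate those ranges through each statement.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        products = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(products), max(products))

# Abstractly "execute" y = x * x + 1 for any x in [-10, 10]:
x = Interval(-10, 10)
y = x * x + Interval(1, 1)
print(y)   # Interval(lo=-99, hi=101)

# The true range of x*x + 1 is [1, 101]; the interval domain over-approximates
# it because it forgets that both operands of the multiplication are the same
# variable -- exactly the kind of soundness/precision trade-off tools make.
```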

Our experience is with two tools from MathWorks called Polyspace Bug Finder and Polyspace Code Prover. Both use abstract interpretation but make different trade-offs. Bug Finder checks for over 300 defects and is designed to be fast with few false positives. Code Prover checks for 30 critical run-time errors and aims to prove their absence with zero false negatives. Both tools typically produce a large number of findings that need to be reviewed manually and sentenced – the process is analogous to looking for a needle in a haystack, and the purpose of this talk is to describe some of the needles we have found and perhaps encourage researchers to devise better ways of finding such needles.

Formally verified OSes have been “the future” for over 15 years, so why haven’t they been adopted more? In this talk we will explore some of the history of microkernels, RTOSes, and software platform security for ICS/CPS, both verified and un-verified. We will outline some of the key challenges we have encountered through engagements with energy suppliers and healthcare providers, and what we see as the barriers to adoption.

The context of ICS/CPS development and their certification / accreditation requirements often create challenges and restrictions not encountered elsewhere. There are many good reasons why developers can’t just add random, untrusted code to critical systems (!), but what if we could? How could we become comfortable — and convince an accreditor — that our untrusted code cannot negatively impact a system’s security and safety-critical functionality? To have any chance of achieving this we need cast-iron guarantees of both separation and controlled information flow, and realistically we can’t build this without provable security (i.e., formal verification). Unfortunately, this isn’t the whole story: we have the high-level safety and security claims (e.g., “the system is secure”, “the system operates safely”), and our low-level proven guarantees of isolation from the OS, but the link between these two is often less well fleshed out than we would like, or than is required by an accreditor.

We want to enable people without a background in formal methods to build real systems that take advantage of strong isolation, and which can demonstrate the link between automated proofs for an underlying microkernel, and the high-level safety case. We believe that taking advantage of this isolation will reduce the self-censorship (inadvertently) caused by certification bodies, and spur innovation in both academia and industry, while still being able to make the required security and safety cases. This talk will explore how far we’ve come, what has worked well, and what we see as the remaining challenges.

RITICS topics relating to this submission:
– Software Security in ICS/CPS specific environments.
– Design, Operation and Analysis of Systems for both Security and Safety.
– Assurance for both security and safety in ICS/CPS including the application of formal methods to CPS.
– Retrofitting Security to legacy systems.
– Resilience of ICS/CPS to adversarial attacks including system recovery and adaptation.

Unmanned Aerial Vehicles (UAVs) are increasingly deployed across critical sectors, yet remain vulnerable to GPS spoofing attacks that can compromise safety, control, and mission integrity. This presentation brings together two complementary studies addressing both offensive and defensive dimensions of GPS spoofing in UAVs. The first investigates a novel time-based GPS spoofing attack that manipulates MAVLink 2.0’s timestamp synchronisation protocol without requiring key recovery, enabling precise clock manipulation, replay attacks, and potential denial-of-service via timestamp overflow. Simulation and hardware-in-the-loop testing confirm the attack’s feasibility and highlight systemic vulnerabilities in constrained UAV communication protocols. The second study introduces a deep learning-based defence mechanism using a BiLSTM-Attention-CNN model, trained solely on GPS sensor data, to detect spoofed signals in real time. Implemented within a modified PX4-JMAVSim environment, the model outperforms traditional ML and DL approaches, demonstrating high precision and recall even under imbalanced data conditions. Together, these works expose critical attack vectors in civilian UAV systems and propose scalable, resource-aware mitigation strategies suited to real-world deployments.
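
For flavour, here is a hedged sketch of the kind of receiver-side plausibility check such a timestamp attack must defeat, based on MAVLink 2 signing’s 48-bit timestamp (10 µs units since 1 January 2015 GMT); the thresholds and function are our own invention for illustration, not part of any MAVLink implementation.

```python
# Classify incoming MAVLink 2 signing timestamps: a time-manipulation or
# replay attempt shows up as a timestamp that goes backwards, jumps
# implausibly far forward, or approaches the 48-bit limit (overflow).

TS_MAX = 2**48 - 1          # 48-bit signing timestamp ceiling
MAX_FORWARD_JUMP = 500_000  # 5 s in 10 us ticks; tune per link latency

last_ts = 0

def check_signing_timestamp(ts: int) -> str:
    """Return 'ok' or a warning for an incoming signing timestamp."""
    global last_ts
    if ts <= last_ts:
        return "reject: stale or replayed timestamp"
    if ts - last_ts > MAX_FORWARD_JUMP:
        return "warn: implausible forward jump (possible clock manipulation)"
    if ts > TS_MAX - MAX_FORWARD_JUMP:
        return "warn: timestamp nearing 48-bit overflow (possible DoS setup)"
    last_ts = ts
    return "ok"

print(check_signing_timestamp(1_000))   # ok: first message
print(check_signing_timestamp(900))     # reject: replayed/stale
```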

This presentation aligns closely with RITICS Fest’s focus on advancing the cybersecurity of Industrial Control and Cyber-Physical Systems (ICS/CPS). UAVs represent a rapidly growing class of autonomous CPS deployed in sectors such as transport, infrastructure monitoring, and emergency response. The research addresses two key challenges: (1) the exploitation of protocol-level vulnerabilities through GPS-based timestamp spoofing in MAVLink 2.0, and (2) the development of a lightweight, AI-powered anomaly detection system to defend against such attacks. This work contributes to several key workshop themes, including threat intelligence, anomaly detection, AI applications in CPS security, and resilience to adversarial attacks. It also presents a novel simulation and test bed environment for GPS spoofing experimentation.

The second Bristol Industrial Control Systems Capture-the-Flag (BrICS-CTF) was held on the 25th-27th June 2025. This event, funded by RITICS and the University of Bristol, is the only open-entry CTF competition with a focus on ICS in the UK. 42 participants across 11 teams from industry took part in the event. Participants first completed a half day of training on practical ICS hacking, followed by two days of competition. In this talk, we will give an overview of the BrICS-CTF event and its outcomes. We will first briefly discuss our two previous events, the lessons learnt from them and how they influenced the design and organisation of the 2025 BrICS-CTF event. We will then talk about the 2025 event, including an overview of the systems and network, examples of challenges and a discussion of the results.

This presentation introduces my research on a context-aware, AI-guided security framework tailored for Industrial Control and Cyber-Physical Systems. Existing approaches, largely adapted from enterprise IT, often treat data in isolation, rely on delayed analysis, and fail to reflect the operational urgency of ICS/CPS environments. These assumptions don’t hold in control environments that depend on real-time operation, safety, and tight coupling between digital and physical systems.

Through my research, I explore an alternative design: a context-aware, structured model that captures the relationships between people, processes, data, and assets in near-real time.
Rather than focusing solely on log correlation or static signatures, the approach builds a continuously updated behavioural model, enabling AI to surface meaningful deviations without disrupting operations. It draws on the principles of digital twins—not for simulation, but for live security monitoring. My aim is to support earlier detection, preserve forensic context, and integrate human decision-making into a security model that’s fit for critical infrastructure and modern CPS design.
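
A purely illustrative sketch of the core idea (our own toy example, not the framework itself): maintain a live model of observed relationships between people, processes, data and assets, and surface interactions that fall outside that learned context rather than matching static signatures.

```python
from collections import defaultdict

class ContextModel:
    def __init__(self) -> None:
        # baseline[subject] = set of (action, asset) pairs seen during normal ops
        self.baseline: dict[str, set[tuple[str, str]]] = defaultdict(set)

    def learn(self, subject: str, action: str, asset: str) -> None:
        self.baseline[subject].add((action, asset))

    def observe(self, subject: str, action: str, asset: str) -> bool:
        """Return True if this interaction deviates from the learned context."""
        return (action, asset) not in self.baseline[subject]

model = ContextModel()
model.learn("operator_7", "write_setpoint", "PLC_boiler_3")
print(model.observe("operator_7", "write_setpoint", "PLC_boiler_3"))  # False: known behaviour
print(model.observe("operator_7", "firmware_update", "PLC_boiler_3"))  # True: new behaviour
```
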
Continuous monitoring of the interactions between the cyber-physical components of any industrial control system (ICS) is required to secure automation of the system controls, and to guarantee that plant processes are fail-safe and remain in an acceptably safe state. Safety is achieved by managing actuation (where electric signals trigger physical movement) in dependence on the corresponding sensor readings, which are used as ground truth in decision making. Timely detection of anomalies (attacks, faults and unascertained states) in ICSs is crucial for the safe running of a plant, the safety of its personnel, and the safe provision of its services. We propose an anomaly detection method that involves accurate linearization of the non-linear forms arising from sensor-actuator relationships, primarily because linear models are easier to solve and well understood. Further, the time complexity of the detection problem is lowered by reducing the dimensionality of the actuators in relationship with each sensor. We demonstrate this using a well-known water treatment testbed as a use case. Our experiments show millisecond response times for detecting anomalies while also providing explainability, a combination not simultaneously achieved by the state-of-the-art AI/ML models with eXplainable AI (XAI) used for the same purpose. Further, we pinpoint the sensor(s) and the actuation state for which an anomaly was detected. A water testbed is used in our experimentation for validation, but the detection algorithms are in general sector-agnostic and transferable to other sectors where anomaly detection is a requirement.
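
A minimal sketch of the residual-based idea (our own illustration with synthetic data and invented tag names; the actual method performs an accurate linearization of the non-linear sensor-actuator forms): fit a linear model from actuator states to the expected sensor reading during normal operation, then flag readings whose residual exceeds a learned threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from normal operation: actuator states -> sensor reading
A_train = rng.random((1000, 3))                          # e.g. pump speed, two valve positions
w_true = np.array([2.0, -1.0, 0.5])
s_train = A_train @ w_true + rng.normal(0, 0.01, 1000)   # e.g. a flow sensor

# Least-squares linearization of the sensor-actuator relationship
w, *_ = np.linalg.lstsq(A_train, s_train, rcond=None)
threshold = 4 * np.std(s_train - A_train @ w)            # residual band for "normal"

def is_anomalous(actuators: np.ndarray, sensor: float) -> bool:
    """Flag the reading if it is inconsistent with the actuator state."""
    return abs(sensor - actuators @ w) > threshold

a = np.array([0.5, 0.2, 0.9])
print(is_anomalous(a, a @ w_true))         # False: consistent reading
print(is_anomalous(a, a @ w_true + 0.5))   # True: e.g. a spoofed sensor value
```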

The increasing autonomy and connectivity of transport systems have transformed vehicles into complex cyber-physical systems (CPS) with dynamic operational demands and interdependent control logic. However, traditional cybersecurity methodologies—rooted in component-centric threat models and rigid assurance frameworks—struggle to accommodate the nonlinear variability, emergent behaviours, and socio-technical interdependencies inherent in Connected and Autonomous Vehicles (CAVs).

This presentation introduces the research direction and early findings of my PhD, which seeks to develop an integrated approach to cybersecurity risk analysis and assurance in CAV ecosystems. At its core is a functional, scenario-based methodology that examines how threats propagate across cyber, physical, and human layers, and how functional degradation or system drift under attack conditions may compromise both operational safety and cyber integrity.
The framework supports impact analysis of adversarial attacks on safety-critical functionalities, such as perception, cooperative awareness, and decision-making modules—particularly under real-world constraints like uncertain environments, fallback dependencies, and degraded communication. It also facilitates assessment of resilience by modelling how system-level behaviour adapts (or fails to adapt) in response to anomalies, spoofing, or coordinated multi-vector threats.
Rather than treating cybersecurity assurance as a checklist of component-level protections, the research proposes a shift toward function-centred modelling, where risks are assessed in terms of how systems actually behave, not simply how they are designed. The presentation draws briefly on an illustrative use case involving tactical lane maneuvering under adversarial V2X message interference, showing how unexpected interactions can erode situational awareness, override safe-state triggers, or create ambiguity at the human-machine interface.
This work aligns directly with RITICS themes including resilience to adversarial attacks, combined cyber-physical-human security, safety-security assurance, and AI vulnerability in autonomous control systems. It aims to contribute to the ongoing evolution of assurance methods that are capable of reasoning about uncertainty, system complexity, and real-world operational risk—a critical need for the secure deployment of autonomous systems in national transport infrastructure.
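
To give a flavour of the lane-manoeuvring use case, here is a toy illustration (invented numbers and function, not the PhD framework) of a functional plausibility check: before acting on a V2X message about an adjacent-lane vehicle, compare it with the ego vehicle’s own perception, so that an adversarial message that disagrees beyond sensor tolerance cannot override safe-state logic.

```python
SENSOR_TOLERANCE_M = 2.0   # assumed radar/lidar agreement band, metres

def v2x_is_plausible(claimed_gap_m: float, perceived_gap_m: float) -> bool:
    """Accept a V2X-reported gap only if it agrees with onboard perception."""
    return abs(claimed_gap_m - perceived_gap_m) <= SENSOR_TOLERANCE_M

print(v2x_is_plausible(25.0, 24.1))  # True: consistent, lane change may proceed
print(v2x_is_plausible(60.0, 24.1))  # False: spoofed "clear lane" claim rejected
```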
Understanding the role of Human Factors in cyber security: A medical device example
Software and hardware cyber security controls can offer a great degree of protection against malicious attack. However, cyber security incidents are often attributed to ‘human error’, while social engineering or ‘disgruntled employee’ behaviour can overcome cyber security controls when specific individuals with the right capability and system access (including physical and digital access) have the motivation to compromise a particular asset. It is clear that technology alone is not enough, as the most advanced systems can be bypassed by certain users under certain conditions.
Human factors research and usability engineering are well established within safety, but within cyber security they are an emerging theme; we propose that threat modelling and systems design incorporate these practices, relying on user research, cognitive science, task analysis, UI analysis and user testing to eventually increase our confidence in the overall dependability of critical systems.
We will present some of the key concepts stemming from human factors within safety, and explain how they are transferrable to cybersecurity. We will use a case study of a medical device within a hospital environment, illustrating how human factors can be formally considered within threat modelling and overall cyber risk assessment.
Space systems are indispensable to modern society and critical infrastructure, enabling global communications, navigation, Earth observation, and defense operations. Many sectors, such as banking (timing via GPS), transportation (satellite navigation), and military command and control, rely on space assets. Protecting these systems from emerging threats is therefore paramount, as any disruption or compromise could have cascading effects on global stability and security. For example, a 2022 cyberattack on Viasat’s KA-SAT network disrupted satellite broadband across Ukraine and parts of Europe, demonstrating how attacks on satellites can cascade into widespread outages. This serves as stark evidence that space infrastructure is operating in an increasingly hostile threat environment, facing growing risks of sophisticated attacks and potentially catastrophic mission disruptions.
Historically, satellites were largely inaccessible to attackers, relying on security-by-obscurity through proprietary protocols and physical isolation. However, the growing use of space systems, especially in the commercial sector, has led to increased adoption of Commercial Off-The-Shelf (COTS) components and the integration of emerging technologies (e.g. cloud, 5G, AI) into all segments of space systems.
Artificial intelligence (AI) and machine learning (ML) are being increasingly integrated into space systems to enhance their autonomy and to drive data-informed decision making. However, AI components can themselves become entry points for new types of attacks as adversaries might manipulate the data or models that space AI systems rely on, resulting in unpredictable or unsafe behavior. In effect, AI expands the attack surface with unique failure modes beyond traditional cybersecurity vulnerabilities.
This presentation introduces a comprehensive multi-dimensional threat taxonomy — the Space Threat Matrix — that captures the interplay between the expanding attack surface of space systems and the novel risks posed by AI. The matrix systematically maps plausible attack vectors across all layers of the system, extending existing categorizations to include AI-specific threats. In addition to surveying classical cyber threats in the space sector, we examine two facets of AI’s impact on space security: AI-enhanced threats to space systems, where adversaries leverage AI capabilities to amplify or automate attacks (e.g., using AI for advanced reconnaissance or to create polymorphic malware); and threats to AI-based space systems, where the integration of AI/ML into satellites and ground stations introduces new vulnerabilities (e.g., adversarial attacks misleading an onboard machine learning model controlling a satellite). Our analysis provides a novel perspective for understanding how AI is reshaping the threat landscape for space systems, and addresses a current gap — the absence of AI-specific attack techniques from the popular knowledge bases used for threat analysis in the space domain.
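
To indicate the shape of such a taxonomy, here is a small illustrative sketch (invented entries and field names, not the actual Space Threat Matrix): each cell maps a space-system segment and threat category to concrete techniques, with AI-specific entries treated as first-class citizens alongside classical cyber threats.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatCell:
    segment: str                      # "space", "ground", "link", "user"
    category: str                     # e.g. "classical", "AI-enhanced", "AI-targeted"
    techniques: list[str] = field(default_factory=list)

matrix = [
    ThreatCell("link",   "classical",   ["telemetry interception", "uplink jamming"]),
    ThreatCell("ground", "AI-enhanced", ["AI-driven reconnaissance", "polymorphic malware"]),
    ThreatCell("space",  "AI-targeted", ["adversarial examples vs. onboard ML",
                                         "training-data poisoning"]),
]

# Query: which techniques threaten AI components on the space segment?
for cell in matrix:
    if cell.segment == "space" and cell.category == "AI-targeted":
        print(cell.techniques)
```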

Modern cyber-physical systems (CPS), such as UAVs, next-generation fighter aircraft, and command-and-control (C2) platforms, integrate digital computation with physical processes to make mission-critical decisions in real time. These systems rely heavily on sensor data (e.g., GPS, pressure transducers, image processors), making them vulnerable to stealthy threats like False Data Injection (FDI) and sensor spoofing. These attacks manipulate input data while maintaining apparent operational normality, potentially leading to unsafe decisions without detection.

In this presentation, we introduce our project and initial results; the project aims to develop a novel verification methodology and corresponding toolchain to detect and mitigate such threats to CPS at design time, making the CPS resilient by design. Typically, CPS are modelled as hybrid systems, comprising discrete (cyber) and continuous (physical) components. The core technical innovation lies in modelling the verification problem as a delta-decision problem, solved using an extended SMT (Satisfiability Modulo Theories) solver.
Furthermore, the project aims to demonstrate the methodology by applying the prototype to a real-world industrial system (provided by our industrial partner, Evolution Measurement) that is used in a flight test environment and is truly representative of a defence C2 system. Specifically, the project aims to test C2 operations that involve a differential pressure scanner (e.g., P10-D), estimating the physical state to uncover FDI vulnerabilities by modelling the system and evaluating the provided aerodynamic flight data, comparing the consistency between real-time “observations” (extracted from the collected data) and “predictions” (generated by the C2 operational model). The tool will detect subtle discrepancies indicative of stealthy data manipulation with zero false alarms, outperforming conventional static, dynamic and AI-based techniques. Specifically, in the case of any inconsistency, the tool produces a counterexample with the values that constitute the vulnerability. Such pressure sensors are widely used in C2 defence systems, e.g., missile and aircraft testing, battlefield environmental monitoring, and UAV and autonomous system applications, to name a few.
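
The flavour of the consistency check can be sketched with an off-the-shelf SMT solver (here Z3 as a stand-in for the project’s extended delta-decision solver; the model, tolerance and values are all invented for illustration):

```python
from z3 import Real, Solver, sat

# Ask: can an observed differential pressure disagree with the operational
# model's prediction by more than delta? If the constraints are satisfiable,
# the solver returns concrete values -- a counterexample witnessing a
# possible stealthy data injection.

delta = 0.05                          # tolerance on model/observation agreement
q, p = Real("dynamic_pressure"), Real("observed_dp")

s = Solver()
s.add(q >= 0, q <= 10)                # physically plausible operating range
s.add(p == 1.02 * q + 0.3)            # tampered sensor relationship (injected bias)
s.add(p - q > delta)                  # observation disagrees with the prediction p ~ q

if s.check() == sat:
    print("Inconsistency witnessed:", s.model())   # counterexample values
else:
    print("Observations consistent with the model within delta")
```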

Under the pathway towards net zero, the power system is undergoing rapid digitalisation to tackle the increasingly complex system dynamics resulting from the integration of inverter-based resources (IBRs). However, the cyber security risk to power systems is also escalating, as evidenced by recent attack incidents such as the hijacking of hundreds of solar panels in Japan and the growing number of Common Vulnerabilities and Exposures (CVEs) identified in renewable generation units. In parallel to the cyber security concern, the complicated and fast dynamics of newly integrated IBRs also pose potential risks to power system stability, where the negative impedance characteristics exhibited by IBRs at lower frequency ranges can lead to sub-synchronous oscillations (SSOs). A notable example was the August 2019 UK grid event, where insufficient damping of sub-synchronous oscillations within a wind farm, following a transmission system fault, directly contributed to a significant blackout.

Unlike traditional power grids driven by the physical behaviour of synchronous machines, IBR-dominated grids depend on software-based control systems, increasing their exposure to cyber attacks. Despite an emerging focus on either cyber security enhancement or system stability assessment in renewable-dominated power systems, there is still a lack of research explicitly linking cyber security to system stability, which will become increasingly vital as cyber threats grow more sophisticated. This paper therefore aims to fill this gap by demonstrating the cyber vulnerability and system impact of IBRs. Such vulnerability could be exploited through small-magnitude, difficult-to-detect triggering signals, leading to rapidly propagating oscillations across power systems, significantly compromising stability and potentially resulting in widespread blackouts.
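
The stability mechanism can be illustrated with a toy second-order mode (invented parameters, not data from the 2019 event): with positive damping a small triggering signal decays, while negative effective damping, of the kind IBR negative-impedance behaviour can introduce at sub-synchronous frequencies, turns the same tiny trigger into a growing oscillation.

```python
import numpy as np

# Simulate the mode x'' + 2*zeta*wn*x' + wn^2*x = 0 with semi-implicit Euler
# and report the peak amplitude reached after a small disturbance.

wn = 2 * np.pi * 8          # an 8 Hz sub-synchronous mode
dt, steps = 1e-3, 3000      # 3 s of simulated time

def simulate(zeta: float) -> float:
    """Peak amplitude after a tiny 0.001 p.u. triggering disturbance."""
    x, v = 0.001, 0.0
    peak = abs(x)
    for _ in range(steps):
        a = -2 * zeta * wn * v - wn**2 * x   # mode dynamics
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

print(f"zeta = +0.02: peak = {simulate(+0.02):.4f} p.u.")  # disturbance decays
print(f"zeta = -0.02: peak = {simulate(-0.02):.4f} p.u.")  # oscillation grows ~20x
```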