Call Announced: February 2016

Application deadline: May 2016

Decisions to applicants: June 2016

Start/end date: June 2016 - Dec 2016

Funding available: Up to £40,000 to be divided amongst the selected proposals

Call Document

Submissions were reviewed against the criteria stated in the call:

  • Match to network remit

  • Engagement of early career academics

  • Realistic objectives within the constraints of funding and time

  • Multi-disciplinary and innovative science

  • Industrial support

  • Value in enhancing research activity

  • Alternative funding sources 

Outcome

After a thorough review process by the Management Team and Steering Committee, the following were awarded funding:

Christos Ellinas (University of Bristol)

Title: Project Resilience: Failure cascades and mitigation for activity networks

This study focuses on a recently introduced class of complex systems – projects – and on one particular abstraction – activity networks. An activity network captures tasks (as nodes) and the functional dependencies between them (as links), supplemented by a temporal stamp denoting each task’s scheduled start and end date. Such networks are susceptible to failure cascades, in which the failure to deliver one task affects its successors, producing a domino-like sequence of failures in a project’s delivery. With greater project resilience being a natural response to such failures, a mitigation scheme is proposed in which tasks are shifted across time in order to shield them from an unravelling failure cascade.
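
The basic mechanics of such a cascade are straightforward to sketch. The following is a minimal, hypothetical illustration in Python (a toy network, not the study’s actual model or data): a delay to one task propagates forward whenever it violates a successor’s scheduled start.

```python
# Toy activity network: task -> (scheduled start day, scheduled end day),
# plus the functional dependencies between tasks. All values are invented.
schedule = {"A": (0, 5), "B": (5, 9), "C": (5, 8), "D": (9, 12)}
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def cascade(failed_task, delay, schedule, successors):
    """Delay failed_task by `delay` days and propagate: any successor due to
    start before a delayed predecessor finishes is pushed back in turn,
    domino-style. Returns the set of tasks affected by the cascade."""
    start = {t: s for t, (s, e) in schedule.items()}
    end = {t: e for t, (s, e) in schedule.items()}
    duration = {t: end[t] - start[t] for t in schedule}
    end[failed_task] += delay
    affected, stack = set(), [failed_task]
    while stack:
        task = stack.pop()
        for nxt in successors[task]:
            if end[task] > start[nxt]:  # dependency violated
                start[nxt] = end[task]
                end[nxt] = start[nxt] + duration[nxt]
                affected.add(nxt)
                stack.append(nxt)
    return affected

print(cascade("A", 3, schedule, successors))  # a 3-day slip on A hits B, C and D
```

Repeating this for every possible initiating task yields the cascade size distribution referred to in the findings below.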

When applied to data from real-world engineering projects, it is found that:

  1. a single failure can trigger large-scale failures, with the latter being characterised by a heavy-tailed cascade size distribution, and

  2. project resilience can be achieved by delaying specific tasks, yet

  3. increased resilience requires an increasingly high number of changes to the original network, implying a high ‘cost’.

Insights 2 and 3 are particularly relevant for practical applications, with future work focusing on balancing the two.

Michael Crosscombe, Jonathan Lawry, Sabine Hauert, Martin Homer (University of Bristol)

Title: Robust distributed decision-making in robot swarms: exploiting a third truth state

Reaching an optimal shared decision by applying only decentralised algorithms is a key aspect of many swarm robotics applications. For example, the best-of-n problem requires a multi-agent swarm to select the best choice from n mutually exclusive alternatives, based only on localised feedback and learning. This generic problem underpins a wide variety of distributed decision-making tasks. In many applications it is important for robot swarms to be robust to various types of noise, as well as to hardware and software failure. In particular, the lack of calibration and the use of low-cost hardware can sometimes cause catastrophic failure. The notion of robustness that concerns this study relates to “individual robots who fail in such a way as to thwart the overall desired swarm behaviour”. For the best-of-n problem, the desired swarm behaviour is convergence to the best decision.

This study investigates the best-of-n problem in robot swarms, with an emphasis on fault tolerance and robustness, and on the effect of malfunctioning robots that can disrupt this desired behaviour by making decisions on the basis of random beliefs.

This study proposes a three-valued model for belief updating in the best-of-n distributed decision-making problem, and compares it to the weighted voter model. Experiments were conducted in a realistic simulation environment, as well as on actual Kilobot swarms of 400 robots. The study investigates robustness to a particular type of malfunction, in which a proportion of the population continually selects its beliefs at random rather than through the belief-updating process. The results from both sets of experiments agree that the three-valued model is more robust to the presence of malfunctioning or noisy individuals than the weighted voter model, but that the weighted voter model has the advantage of converging more quickly to the best choice.
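
As a rough illustration of the idea, the sketch below implements one plausible three-valued updating rule in Python, with beliefs taking values in {0, ½, 1} (against, undecided, for). The specific combination operator, the well-mixed pairwise interactions, and all parameters are illustrative assumptions rather than the study’s exact formulation; the feedback signal about option quality is omitted for brevity.

```python
import random

def combine(x, y):
    """One plausible three-valued consensus rule: agreement is kept, an
    undecided agent adopts its partner's view, and direct conflict
    (0 meets 1) yields 'undecided' (0.5)."""
    if x == y:
        return x
    if 0.5 in (x, y):
        return x if y == 0.5 else y
    return 0.5

def simulate(n_agents=100, n_faulty=10, steps=5000, seed=1):
    rng = random.Random(seed)
    beliefs = [rng.choice([0.0, 0.5, 1.0]) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)         # random pairwise encounter
        beliefs[i] = beliefs[j] = combine(beliefs[i], beliefs[j])
        for k in range(n_faulty):                     # malfunctioning agents keep
            beliefs[k] = rng.choice([0.0, 0.5, 1.0])  # randomising their belief
    return beliefs

final = simulate()
print("undecided fraction:", final.count(0.5) / len(final))
```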

Edmund Barter, Thilo Gross (University of Bristol)

Title: Project Skyline

Diffusion maps can be thought of as a form of non-linear principal component analysis (PCA): they find the manifold on which the main variation in the data takes place, and hence the dimensionality of that manifold. In contrast to linear PCA, diffusion maps can detect curved manifolds. The method is largely parameter-free, so that no additional bias, beyond what is contained in the data, is introduced into the analysis. The central aim of this study is to perform a non-linear principal component analysis of the UK census data using diffusion maps, in order to identify variables of cities that can be used in future dynamical models of resilience.
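
For readers unfamiliar with the technique, a minimal diffusion-map computation looks roughly as follows (a generic numpy sketch on random toy data, not the project’s pipeline or its choice of kernel scale): pairwise affinities define a random walk on the data, and the leading non-trivial eigenvectors of its transition matrix give the low-dimensional coordinates.

```python
import numpy as np

def diffusion_map(X, epsilon=1.0, n_components=2):
    """Embed rows of X into `n_components` diffusion coordinates."""
    # Pairwise squared distances and a Gaussian affinity kernel.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq / epsilon)
    # Row-normalise affinities into a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                 # sort by eigenvalue
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial first eigenvector (eigenvalue 1, constant).
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # toy data: 200 samples, 5 variables
print(diffusion_map(X, epsilon=5.0).shape)  # (200, 2)
```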

The study demonstrates that diffusion maps can be gainfully employed to analyse very large datasets. In particular, it shows that the methodology can be used to identify variables that span the social fabric of cities. The analysis also suggests that, of more than 1,400 census variables, a set of 12 may be sufficient for future urban modelling work. The study has laid the groundwork for a PhD studentship starting in September 2018, and the results obtained have enabled links with a private business (Ocean Lettings), a think tank (Centre for Cities), and Bristol City Council.

Joseph Fleming, Professor Alan Purvis (Durham University)

Title: Self repair of avionic systems using the Durham flight simulator and cellular automata

From aircraft manufacturers to insurers, safety is of the highest priority for the many stakeholders with an interest in maintaining the profitability of the aviation business. Given the nature of flight, the systems on board an aircraft must be resilient in terms of both hardware and software: simply swapping out damaged components or parts during a flight is not feasible. Current redundancy methods in aircraft include, for example, separate airspeed-indication systems for each pilot. More generally, resilience within systems can take the form of Triple Modular Redundancy or Quad Logic; both methods require a large overhead of additional components, as well as monitoring software.
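
To make the overhead concrete, here is a hedged sketch of the voting core of Triple Modular Redundancy (a generic illustration, not the project’s implementation): three independent channels compute the same quantity, and a majority vote masks a single faulty channel at the cost of triplicated hardware.

```python
# Illustrative triple modular redundancy (TMR): three independent channels
# produce the same quantity and a majority voter masks a single fault.
def tmr_vote(a, b, c, tol=1e-6):
    """Return the majority value among three redundant readings; if all
    three disagree, the fault cannot be masked and is reported."""
    if abs(a - b) <= tol or abs(a - c) <= tol:
        return a
    if abs(b - c) <= tol:
        return b
    raise ValueError("no two channels agree: cannot mask the fault")

# One faulty channel (the second) is out-voted by the two healthy ones.
print(tmr_vote(251.0, 999.9, 251.0))  # -> 251.0
```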

This study aims to use such self-healing methods to correct faults in the aircraft domain. An input signal to the system is modulated to simulate a fault, and the corresponding relationships between system variables are monitored for their response. By studying these relationships, the faulty variable can be isolated and then used to trigger a healing method. The conclusion is that both detection and healing of a fault can be achieved.
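
The fault-isolation step described above could be sketched as follows. Everything here (the variable names, relationships and threshold) is invented for illustration; it simply shows the idea of flagging the variable whose expected relationships to the others no longer hold.

```python
# Hedged sketch of residual-based fault isolation: each monitored variable
# has an expected relationship to the others; a variable whose readings
# persistently violate its relationships is flagged as the faulty one.
def isolate_fault(readings, models, threshold=5.0):
    """readings: {name: value}; models: {name: callable predicting that
    variable from the rest}. Returns names whose residual exceeds threshold."""
    faulty = []
    for name, predict in models.items():
        others = {k: v for k, v in readings.items() if k != name}
        residual = abs(readings[name] - predict(others))
        if residual > threshold:
            faulty.append(name)
    return faulty

# Toy example: indicated airspeed should track ground speed minus tailwind.
models = {"airspeed": lambda r: r["ground_speed"] - r["tailwind"]}
print(isolate_fault({"airspeed": 180.0, "ground_speed": 250.0,
                     "tailwind": 20.0}, models))  # ['airspeed']
```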

Thomas Barratt, Professor Alan Purvis (Durham University)

Title: Can FlightGear be used to find flight failures?

This project sets out to understand the methods by which FlightGear deals with failures in aircraft, and to see whether these methods are compatible with a detection method.

Of the four properties used to identify flights with errors, three (75%) correctly identified a flight with an error. This is promising, and shows that envelope detection could work through this method. As more flight logs are saved and become available, the algorithm will continually improve its ‘safe flight envelope’: the more tests that are performed, the more reliable the results will become.
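
A minimal sketch of how such a learned envelope might work (an assumed approach, not FlightGear’s API or the project’s actual algorithm): each monitored property keeps the range observed across known-good flights, and a new flight falling outside any range is flagged.

```python
# Sketch of a 'safe flight envelope' learned from logs: per-property bounds
# widen as more known-good flights are added, matching the idea that more
# saved logs make the envelope (and hence detection) more reliable.
def build_envelope(safe_flights):
    """safe_flights: list of {property: [samples]}. Returns {property: (lo, hi)}."""
    envelope = {}
    for flight in safe_flights:
        for prop, samples in flight.items():
            lo, hi = min(samples), max(samples)
            if prop in envelope:
                envelope[prop] = (min(envelope[prop][0], lo),
                                  max(envelope[prop][1], hi))
            else:
                envelope[prop] = (lo, hi)
    return envelope

def violations(flight, envelope):
    """Return the properties of a new flight that leave the safe envelope."""
    return [p for p, samples in flight.items()
            if min(samples) < envelope[p][0] or max(samples) > envelope[p][1]]

env = build_envelope([{"pitch": [-5, 10]}, {"pitch": [-8, 12]}])
print(violations({"pitch": [-20, 5]}, env))  # ['pitch']
```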

Neil Carhart (Durham University) 

Title: Investigating the relationship between engineering standards and resilience

Engineering standards and codes of practice are intended to ensure that products and processes conform to acceptable levels of safety. They provide a framework for design and construction that aims to reduce variability and uncertainty, so that compliance results in predictably tolerable outcomes. However, they can undermine the adaptability of those who use them and of the resulting products, and adaptability is essential for meeting changing socio-environmental needs and for resisting and recovering from unanticipated hazards.

This workshop set out to identify areas of focus for future research by identifying potential mechanisms through which codes of practice enhance or impair the capabilities associated with resilient systems. To do this, the workshop employed Group Model Building approaches to the construction and critique of Causal Loop Diagrams.

The feedback loops that were identified by the participants emphasise the delicate balancing processes that often influence the resilience of a dynamic system within a constantly changing operational context, while also providing some insight into how these processes could be managed.
