The final description may miss details, or even fundamental aspects of the organization, that may become available when one or a few modalities are pushed to their limits.
Magnetic field tomography (Ioannides et al.) is capable of delineating contributions within distinct cytoarchitectonic areas, which leads to more refined analyses, as in the example shown in the figure: a demonstration of the coincidence in spatial location and timing of the earliest visually evoked (top) and spatial-attention-related (bottom) activations in response to images presented in the left visual field.
No matter how regional time series are derived, mathematical methods must then be employed to extract from each pair of time series a quantitative measure of the functional link between two brain areas, usually in two stages. First, one must define an appropriate measure of linked activity. For example, using time-delayed mutual information as a non-linear measure enables identification and quantification of linkages between areas in real time, for example in relation to an external stimulus or event, and enables assessment of reactive delays.
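As a concrete illustration, a histogram-based estimate of time-delayed mutual information can be sketched as follows; the binning, the synthetic series and the built-in 3-step delay are illustrative assumptions, not a prescription from the literature.

```python
# Sketch: estimating time-delayed mutual information between two
# "regional" time series (all names and parameters are illustrative).
import math
import random
from collections import Counter

def mutual_information(x, y, bins=4):
    """Histogram-based mutual information (in bits) of two equal-length series."""
    def discretize(s):
        lo, hi = min(s), max(s)
        w = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / w), bins - 1) for v in s]
    xd, yd = discretize(x), discretize(y)
    n = len(xd)
    px, py, pxy = Counter(xd), Counter(yd), Counter(zip(xd, yd))
    return sum(c / n * math.log2((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def delayed_mi(x, y, max_lag):
    """MI between x(t) and y(t + lag) for each candidate lag."""
    return {lag: mutual_information(x[:len(x) - lag], y[lag:])
            for lag in range(max_lag + 1)}

# Toy data: y echoes x with a 3-step delay plus noise, so the MI
# profile should peak at lag = 3, recovering the "reactive delay".
random.seed(1)
x = [random.gauss(0, 1) for _ in range(2000)]
y = [0.0, 0.0, 0.0] + [xi + random.gauss(0, 0.1) for xi in x[:-3]]
mi = delayed_mi(x, y, max_lag=6)
best = max(mi, key=mi.get)
print(best)
```

The lag at which the mutual information peaks is read off as the reactive delay between the two areas; a non-linear coupling would be detected in the same way, which is the advantage over simple correlation.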
The second stage of addressing the connectivity problem is the technical problem of using graph-theory tools to assemble the pair-wise links into a more global network. Specific problems can be tackled using a subset of the entire network, through judicious choice of which cytoarchitectonic areas to include and careful design of experiments. While empirical Neuroscience (section Neuroscience) deals with measuring and functionally interpreting connectivity on many scales, the aspects of Computational Neuroscience which we address here deal with structure-function relationships on a more abstract, aggregated level.
Generic models of network topology, as well as simple abstract models of the dynamical units, play an important role. In Computational Neuroscience, the idea of relating network architecture with dynamics and, consequently, function has long been explored.
On the level of network architecture, a particularly fruitful approach has been to compare empirically observed networks with random graphs. The field was revolutionized in the late 1990s by the publication of two further models of random graphs: a model of small-world graphs (Watts and Strogatz, 1998), uniting high local clustering with short average distances between nodes, and a model of random graphs with a broad, power-law-shaped degree distribution (Barabasi and Albert, 1999). Within Computational Neuroscience, the fundamental unit is typically defined either as the individual neuron (Vladimirov et al.) or as a coarser-grained unit.
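The two random-graph models can be sketched in a few lines; these are toy pure-Python versions, and the sizes, rewiring probability and attachment parameter below are arbitrary choices for illustration.

```python
# Toy versions of the two random-graph models named above: a
# Watts-Strogatz small-world graph and a Barabasi-Albert
# preferential-attachment graph (all parameters illustrative).
import random
random.seed(0)

def watts_strogatz(n, k, p):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    on one side; each edge is rewired with probability p."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if random.random() < p:                 # rewire this edge
                b = random.randrange(n)
                while b == a or (min(a, b), max(a, b)) in edges:
                    b = random.randrange(n)
            edges.add((min(a, b), max(a, b)))
    return edges

def barabasi_albert(n, m):
    """Each new node attaches to m targets chosen in proportion to
    current degree (via a list holding one entry per edge endpoint)."""
    targets, endpoints, edges = list(range(m)), [], set()
    for new in range(m, n):
        for t in set(targets):
            edges.add((t, new))
            endpoints += [t, new]
        targets = [random.choice(endpoints) for _ in range(m)]
    return edges

def degrees(edges, n):
    d = [0] * n
    for a, b in edges:
        d[a] += 1
        d[b] += 1
    return d

ws = watts_strogatz(200, 4, 0.1)
ba = barabasi_albert(200, 2)
# Hubs emerge under preferential attachment, while small-world
# degrees stay concentrated near 2k:
print(max(degrees(ws, 200)), max(degrees(ba, 200)))
```

The contrast in the maximum degree is the point: the small-world model keeps a narrow degree distribution while gaining short paths, whereas preferential attachment produces the broad, hub-dominated distribution.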
The discussion below focusses on the latter case. In contrast to the discussion in section Neuroscience, the fundamental unit is not necessarily identified with cortical areas, but is more flexible, allowing aggregates of cortical areas, or even abstract units derived from raw fMRI data, to be the fundamental units that constitute the nodes of a network.
Such cortical areas can also be defined by anatomical means and neurobiological knowledge, as, for example, in the cortical area networks of the cat or the macaque; see Hilgetag et al. In Computational Neuroscience, SC refers to brain network connectivity derived from anatomical and other data, at the level of the fundamental unit.
FC refers to relationships among nodes inferred from the dynamics. Typical observables for FC are co-activations or sequential activations of nodes. A node can be excited (active), refractory (resting) or susceptible (waiting for an excitation in its neighbourhood). Upon the presence of such a neighbouring excitation, a susceptible node changes to the active state for a single time step, then goes into the refractory state, from which it moves to the susceptible state with a probability p at each time step. Furthermore, spontaneous excitations are possible with a small probability f.
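These excitable (susceptible-excited-refractory) dynamics can be sketched directly; the ring network, the values of p and f, and the run length below are illustrative assumptions.

```python
# Minimal sketch of the susceptible-excited-refractory dynamics
# described above, on a 50-node ring (parameter values illustrative).
import random
random.seed(2)

S, E, R = "S", "E", "R"
n = 50
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
state = {i: S for i in range(n)}
state[0] = E                        # seed a single excitation

def step(state, p=0.3, f=0.001):
    """One synchronous update of the rules stated in the text."""
    new = {}
    for i, s in state.items():
        if s == E:                  # active for exactly one time step
            new[i] = R
        elif s == R:                # recover with probability p
            new[i] = S if random.random() < p else R
        else:                       # susceptible: excited by a neighbour,
            if any(state[j] == E for j in neighbours[i]) or random.random() < f:
                new[i] = E          # or spontaneously with probability f
            else:
                new[i] = S
    return new

activity = []
for _ in range(100):
    # f = 0 here keeps the demo a single clean wave for illustration
    state = step(state, f=0.0)
    activity.append(sum(1 for s in state.values() if s == E))

# Two wave fronts travel around the ring and annihilate on collision,
# so every node except the seed fires exactly once.
print(activity[0], sum(activity))
```

Recording which nodes co-activate over such a run is exactly the kind of observable from which FC is inferred.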
A more global perspective includes learning, i.e., the co-evolution of structural and functional connectivity. In its simplest form, such a co-evolution is given by Hebbian learning rules (Hebb, 1949), where, qualitatively speaking, frequently used network links persist, while rarely used links are degraded. The co-evolution of SC and FC offers an interesting possibility for the overarching perspective of self-organization and emergent behaviours, as the system can now, in principle, tune itself towards phase-transition points, maximizing its flexibility and its pattern-formation capacities.
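A minimal sketch of such a Hebbian rule, assuming illustrative gain and decay rates, might look like this: links between co-active nodes are reinforced, and all other links decay.

```python
# Qualitative sketch of a Hebbian rule coupling FC back into SC:
# links used by co-activations persist, unused links degrade
# (gain and decay rates are illustrative assumptions).
def hebbian_update(weights, active, gain=0.2, decay=0.05):
    """weights: dict[(i, j)] -> strength; active: set of co-active nodes."""
    new = {}
    for (i, j), w in weights.items():
        if i in active and j in active:   # frequently used link: reinforce,
            w += gain * (1.0 - w)         # saturating at strength 1
        else:                             # rarely used link: degrade
            w -= decay * w
        new[(i, j)] = w
    return new

w = {(0, 1): 0.5, (1, 2): 0.5}
for _ in range(20):                       # nodes 0 and 1 keep co-firing
    w = hebbian_update(w, active={0, 1})
print(round(w[(0, 1)], 3), round(w[(1, 2)], 3))  # 0.994 0.179
```

After twenty co-activations the used link has nearly saturated while the unused one has decayed to a fraction of its initial strength, which is the qualitative behaviour the text describes.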
This concept is called self-organised criticality and goes back to the pioneering work of Bak et al. (1987). Phase-transition points of a dynamical system are parameter choices that position the system precisely at the boundary between two dynamical regimes. At such points a small change of the parameter value can induce drastic changes in system behaviour. As already suggested with the example of waves around hubs, the concepts of self-organization and pattern formation may provide a useful theoretical framework for describing the interplay of SC and FC.
Returning to network topology, a wide range of descriptors of connectivity is used in Computational Neuroscience and other disciplines. Another common quantifier of connectivity is the average degree, i.e., the mean number of links per node. Beyond such simple quantifiers, the connection pattern in a graph can be characterized in a multitude of ways, for instance via clustering coefficients (Watts and Strogatz), centrality measures (Newman), and the matching index or topological overlap (Ravasz et al.).
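For concreteness, three of these quantifiers can be computed on a toy graph; the adjacency structure below is invented for illustration.

```python
# Toy illustrations of the quantifiers mentioned above: average degree,
# per-node clustering coefficient, and the matching index of a node pair.
adj = {                      # small undirected example graph
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3},
}

# Average degree: mean number of links per node.
avg_degree = sum(len(nb) for nb in adj.values()) / len(adj)

def clustering(v):
    """Fraction of neighbour pairs of v that are themselves linked."""
    nb = list(adj[v])
    k = len(nb)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nb[j] in adj[nb[i]])
    return 2 * links / (k * (k - 1))

def matching_index(u, v):
    """Overlap of the neighbourhoods of u and v, excluding u and v."""
    nu, nv = adj[u] - {v}, adj[v] - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

print(avg_degree, clustering(1), matching_index(0, 3))
```

On this graph the average degree is 2.4, node 1 has clustering 2/3 (two of its three neighbour pairs are linked), and nodes 0 and 3, though not directly linked, share two of three neighbours and so have a high matching index.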
Geomorphologists study the origin and evolution of landforms. Geomorphic surface processes comprise the action of different geomorphic agents, or transporting media, such as water, wind and ice, which move sediment from one part of the landscape to another, thereby changing the shape of the Earth. Looking at potential sediment pathways (connections) and transport processes has therefore always been one of the core tasks in Geomorphology.
(Chorley and Kennedy; Brunsden and Thornes). Since the beginning of the 21st century, however, connectivity research has experienced a huge boom, as geomorphologists started to develop new connectivity concepts to better understand the complexity of geomorphic systems and system response to change. It is widely recognised that investigating connectivity in geomorphic systems provides an important opportunity to improve our understanding of how physical linkages govern geomorphic processes (Van Oost et al.). Connectivity further reflects the feedbacks and interactions between the different system components under changing conditions (Beuselinck et al.).
However, to date most, if not all, of the existing connectivity concepts in geomorphology represent a palimpsest of traditional system thinking based on general systems theory. Landforms are the product of a myriad of processes operating at different spatial and temporal scales: defining a fundamental unit for the study of connectivity is therefore particularly difficult.
Geomorphologists have traditionally drawn structural boundaries between the units of study, often made obvious by visible sharp gradients in the landscape, for example channel-hillslope or field boundaries. This imposition of structural boundaries has led to the separate consideration of these landscape compartments, rather than examination of the interlinkages between them, which results in an incomplete picture when it comes to explaining large-scale geomorphic landscape evolution.
Such a framework has been proposed by Bracken et al. However, this framework provides no insight into how the fundamental unit may be defined. Its size and demarcation are highly dependent on (i) the processes involved and (ii) the spatial and temporal scale of the study. If, for example, the temporal scale of analysis is considerably greater than the frequency of key processes, then sediment connectivity will be perceived to be higher. Alternatively, if the temporal scale over which sediment connectivity is evaluated is less than the frequency at which key sediment-transport-related processes within the study domain operate, then sediment connectivity will be perceived to be lower (Bracken et al.).
The size of a fundamental unit in Geomorphology is thus dependent on the underlying research question and may range from the plot scale to much larger scales. However, geomorphic processes tend to vary between spatial scales, which leads to one of the key problems in geomorphology: the problem of scaling. Consideration of how fundamental units make up landscapes has a long history in geomorphology.
Vertically, the upper boundary of a geomorphic cell is defined by the atmosphere, while the lower boundary is generally formed by the bedrock layer of the lithosphere. Laterally, geomorphic cells are delimited from neighbouring cells by a change in the environmental characteristics that determine hydro-geomorphic boundary conditions.
In Geomorphology, SC describes the extent to which landscape units (however defined) are physically linked to one another (With et al.). Functional interlinkages between system compartments were considered early on; however, besides a general notion of the importance of coupling between system components for landscape evolution, these early authors did not provide any further information on how to define and quantify such relationships. In Geomorphology it is becoming increasingly accepted that SC and FC cannot be separated from each other in a meaningful way, due to inherent feedbacks between them (see figure).
Figure: schematic diagram of the geomorphic feedbacks between structural and functional connectivity (source: Wainwright et al.).
Landscapes can be perceived as systems exhibiting a distinct type of memory, i.e., the persistence of past states in present forms and dynamics. Thus, a critical issue when separating SC and FC is determining the timescale at which a change in SC becomes dynamic.
Past geologic, anthropogenic and climatic controls upon sediment availability, for example, influence contemporary process-form relationships in many environments (Brierley), such as embayments (e.g., Hine et al.; Poeppl et al.). In most geomorphic systems the imprint of memory, and the timescales over which feedbacks affect connectivity, are too strong for a separation of SC and FC. However, this philosophical position has not yet made its way into approaches to measuring connectivity.
A challenge when developing quantitative descriptions of the structural-functional evolution of connectivity in geomorphic systems is thus how to incorporate memory effects. Furthermore, when distinguishing between SC and FC, the challenge is to achieve a balance between scientific gains and losses, depending further on the spatio-temporal scale of interest and the applied methodology. The conceptualization of landforms as the outcome of the interactions of structure, function and memory implies that landscapes are organised in a hierarchical manner, as they are seen as complex macroscopic features that emerge from the myriad microscopic factors (processes) which form them at different spatio-temporal scales (Harrison). For example, river meander development (e.g., Church) or dune formation (e.g., Baas) can be seen as emergent properties of geomorphic systems that are governed by manifold microscale processes. In Geomorphology, emergence thus becomes the basis on which qualitative structures (landforms) arise from the self-organisation of quantitative phenomena (processes) (Harrison), operating at a range of different spatial and temporal scales. To get a grasp on the emergent behaviour of geomorphic systems, recent advances in Geomorphology draw on chaos theory and the quantitative tools of complex systems research (e.g., Coco and Murray; combined approaches: e.g., Murray et al.), combining numerical models with new data-collection strategies and other techniques, as also discussed in section 3. To date, however, this remains an untested hypothesis subject to further inquiry. Yet the potential appears to exist that connectivity may help us understand how geospatial processes produce a range of fluxes that come together to produce landscape form.
In Geomorphology, it is only possible to measure (i) the morphology of the landscape itself, from which SC is quantified, or (ii) fluxes of material that are a result of FC and event magnitude. Few standard methods exist to quantify FC directly (Bracken et al.). One of the key challenges in measuring connectivity is to define the spatial and temporal scales over which connectivity should be assessed, which may depend on how the fundamental unit is defined.
Furthermore, data comparability is often constrained by the measurement design, including the types of technical equipment involved. Changes in SC can be quantified at high spatial and temporal resolution using several novel methods that have been developed or improved over recent years.
Structure-from-Motion (SfM) photogrammetry and laser scanning are techniques that create high-resolution, three-dimensional digital representations of the landscape. Sediment transport processes (FC) are traditionally measured using methods ranging from erosion plots, for small-scale measurements, to water sampling for suspended sediment and bedload traps in streams and rivers, for large-scale measurements. Recently, new techniques have been developed to trace and track sediment with higher spatial and temporal resolution.
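Digital elevation models of the kind produced by SfM or laser scanning feed structural-connectivity assessment directly. As a toy sketch, a D8-style flow-routing pass can count, for each cell, the upslope cells draining through it; the grid and its values are invented, and real workflows use dedicated terrain-analysis software.

```python
# Toy D8-style flow routing on a tiny synthetic DEM: each cell drains
# to its steepest-descent neighbour, and we accumulate how many
# upslope cells pass through each cell (all values illustrative).
elev = [
    [9.0, 8.0, 7.0],
    [8.5, 6.0, 5.0],
    [8.0, 5.5, 3.0],    # lowest cell: bottom-right corner (the outlet)
]
rows, cols = len(elev), len(elev[0])

def downhill(r, c):
    """Steepest-descent neighbour of (r, c), or None at a pit/outlet."""
    best, drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                d = elev[r][c] - elev[rr][cc]
                if d > drop:
                    best, drop = (rr, cc), d
    return best

accum = [[1] * cols for _ in range(rows)]        # each cell drains itself
order = sorted(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: -elev[rc[0]][rc[1]])   # process high to low
for r, c in order:
    nxt = downhill(r, c)
    if nxt:
        accum[nxt[0]][nxt[1]] += accum[r][c]

print(accum[2][2])   # the outlet gathers flow from the whole grid
```

Maps of such accumulated flow are one of the structural ingredients that topography-based connectivity indices combine with surface characteristics.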
Sediment tracers, which can either occur naturally in the soil or be applied to it, have been increasingly used to quantify erosion and deposition of sediments. Furthermore, laboratory experiments allow sediment tracking in high detail, using multiple high-speed cameras to record the trajectories and velocities of individual sand particles under varying conditions (Long et al.). However, it is highly questionable whether measuring water and sediment fluxes provides sufficient information to infer FC adequately, since these data solely represent snapshots of fluxes instead of reflecting system dynamics.
Besides measuring landscape structure and sediment fluxes to infer connectivity, different types of indices and models are used. Connectivity indices mainly use a combination of topographic and vegetation characteristics to determine connectivity (Borselli et al.). These indices are static representations of SC, which are useful for determining areas of high and low SC within the study areas. Because the indices are static, however, they do not provide information about fluxes; different types of models are used for that purpose.
Landscapes are composed of interconnected ecosystems that mediate ecological processes and functions, such as material fluxes and food-web dynamics, and control species composition, diversity and evolution.
The importance of connectivity within ecology has been recognised for decades. Connectivity is now recognised to be an important determinant of many ecological processes (Kadoya), including population movement (Hanski), changes in species diversity (Cadotte), metacommunity dynamics (Koelle and Vandermeer), and nutrient and organic matter cycling (Laudon et al.).
For example, in marine ecology, identifying and quantifying the scale of connectivity of larval dispersal among local populations is a central concern. Regardless of the scale at which connectivity is defined within Ecology, there is nonetheless consensus that connectivity affects most population, community, and ecosystem processes (Wiens; Moilanen and Hanski). Hierarchy theory provides a clear tool for dealing with spatial scale, and suggests that all scales are equally deserving of study (Cadenasso et al.).
It is therefore critical that the fundamental unit be defined clearly, as well as the relationships that cross scales (Ascher). The fundamental unit is typically defined as the ecosystem: a complex of living organisms, their physical environment, and their interrelationships in a particular unit of space (Weathers et al.). In this respect, an ecosystem can be a single gravel bar, a whole river section, or the entire catchment; equally, an ecosystem can be a plant, a vegetation patch, or a mosaic of patches, depending on the spatiotemporal context and the specific questions.
Hence, the ecosystem concept offers a unique opportunity for bridging scales and systems. Notably, this definition of the fundamental unit is scale-free; the identity of the fundamental unit will therefore emerge naturally out of the ecosystem(s) in question. Whilst an appropriate definition of the fundamental unit is critical in Ecology, this does not present a challenge, as the ecosystem provides a clear-cut definition that is applied ubiquitously. Ecology has long been concerned with structure-function relationships (Watt), and connectivity now tends to be viewed both structurally and functionally (Goodwin), taking both structure and function into account (often referred to as landscape connectivity; Belisle). Structural connectivity refers to the architecture and composition of a system (Noss and Cooperrider).
Measurements of SC are sometimes used to provide a backdrop against which complex behaviour can be measured (Cadenasso et al.). Functional connectivity depends not only on the structure of the landscape, but also on the behaviour of, and interactions between, particular species, or the transfer and transformation of matter, and on the landscapes in which these species and processes occur (Schumaker; Wiens; Tischendorf and Fahrig; Moilanen and Hanski). Moreover, it is concerned with the degree and direction of movement of organisms, or the flow of matter, through the landscape (Kadoya), describing the linkages between different landscape elements (Calabrese and Fagan). In terms of animals, the FC of a landscape depends on how an organism perceives and responds to landscape structure within a hierarchy of spatial scales (Belisle), which will depend on its state and its motivation, which in turn dictate its needs and how much it is willing to risk to fulfil those needs (Belisle). Thus, the FC of a landscape is likely to be context- and species-dependent (e.g., Pither and Taylor). Linking and separating SC and FC is challenging. Riverine assemblages, for example, are governed by a combination of local and regional factors, and there is empirical evidence that the position within the river network matters. For example, in looking at the interacting effects of habitat suitability (patch quality), dispersal ability of fishes, and migration barriers on the distribution of fish species within a river network, it has been found that whilst dispersal is most important in explaining species occurrence on short time scales, habitat suitability is fundamental over longer time scales (Radinger and Wolter). Hence, ignoring network geometry and the role of spatial connectivity may lead to major failures in conservation and restoration planning.
These legacy effects may consist of information carried over from past system states. The resulting time lags in the functional response to changes in system structure can confound the ability to make meaningful separations between structure and function.
Emergent behaviour in Ecology is evident in the scale-free nature of ecosystems. Because ecosystems can be defined at any scale (usually spatial rather than temporal), interactions across different hierarchical levels lead to emergent behaviour at other scales too. A striking example of such emergent behaviour is the existence of patterns in vegetation, for example Tiger Bush (MacFadyen; Clos-Arceduc). Attempts have been made to explain this phenomenon using advection-diffusion models; a more extensive critique of such approaches is given in Stewart et al., based upon the argument that spatial patterns emerge in response to interactions between landscape structure and biophysical processes (e.g., Turnbull et al.). Evolutionary impacts of past processes, such as glaciations, also shape emergent behaviour in Ecology, through separations and reconnections of larger areas, even continents. Increases in the physical connectivity of landscape patches also facilitate the invasion of non-native species, which in turn may trigger long-term evolutionary processes for both native and non-native species (e.g., Mooney and Cleland). The challenge in Ecology is to overcome the highlighted methodological constraints to studying emergent behaviour and to develop approaches that truly allow for its exploration. Measuring SC tends to be based on simple indices of patch or ecosystem connectivity. Patch proximity indices are widely used (e.g., Bender et al.). Other structural approaches to looking at ecological corridors include landscape genetics, telemetry, least-cost models, and raster-, vector- and network-based models, among many other methods, which offer unique opportunities to quantify connectivity (see Cushmann et al.).
Most metacommunity and metaecosystem studies apply lattice-like grids as landscape approximations, where dispersal is random in direction and distance varies with species. However, many natural systems, including river networks, mountain ranges and cave networks, have a dendritic structure. These systems are not only hierarchically organised, but their topology and physical flow dictate the distance and directionality of dispersal and movement (Altermatt, and references therein).
In a graph-based approach (e.g., Larsen et al.), patches (or habitats, or ecosystems) are considered as nodes, with links representing the pathways between these nodes. Most work in Ecology has focused on unweighted, one-mode (monopartite) networks (Dormann and Strauss). Measuring FC requires dealing with complex phenomena that are difficult to sample, experiment on and describe synthetically (Belisle). Approaches to measuring FC have the greatest data requirements, and include connectivity measures based on organism movement, such as dispersal success and immigration rate, with, for example, a high immigration rate indicating a high level of FC.
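A minimal sketch of this graph-based view for a dendritic (river-like) network, with invented patch names and purely passive downstream drift, might be:

```python
# Habitat patches as nodes of a dendritic network; directed links
# follow the flow, so directionality constrains dispersal
# (topology and names are invented for illustration).
downstream = {              # each patch drains to exactly one other
    "headwater_A": "confluence_1",
    "headwater_B": "confluence_1",
    "confluence_1": "mainstem",
    "headwater_C": "mainstem",
    "mainstem": "outlet",
}

def reachable(start):
    """Patches reachable by passive downstream drift from `start`."""
    seen, node = set(), start
    while node in downstream:
        node = downstream[node]
        seen.add(node)
    return seen

# Passive drift is one-way: a headwater reaches the outlet, but no
# patch in the network can drift back up into a headwater.
print(reachable("headwater_A"))
print(all("headwater_A" not in reachable(p) for p in downstream))
```

The asymmetry of the reachability sets is the point: on a dendritic network, distance and direction of dispersal are dictated by topology and flow, unlike on a lattice where dispersal is equally possible in all directions.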
In a study of seven Atlantic Forest bird species, the SC-FC relation was explored using a range of empirical survey techniques (Uezu et al.). Quantitative analysis of landscape structure was carried out using a suite of SC measures, while functional connectivity measures were derived from bird surveys and playback techniques carried out at snapshots in time and at discrete locations. Whilst these empirical measures allow insight into SC-FC relations, they nonetheless go hand-in-hand with a series of assumptions that allow the level of FC to be inferred.
Similarly, data on dispersal distances (a proxy for FC) also tend to be relatively sparse; for example, they have been collected for only a small number of marine species (Cowen et al.; Sammarco and Andrews; Shanks et al.). An ongoing challenge associated with empirically-based studies for assessing FC in Ecology is that they provide only a snapshot of dispersal or migration, representing only one possible movement scenario. It is generally accepted that it is impossible to measure empirically the full range of spatial and temporal variability in FC (Cowen et al.). Modelling approaches are therefore being used increasingly to overcome the limitations of empirically-based approaches to measuring FC.
However, these modelling approaches are still limited by a paucity of available empirical data against which to verify the results of modelling experiments. The limitations of patch-based or landscape-based approaches to studying connectivity, and the prevalence of ecological research carried out at increasingly larger scales, have driven research in the direction of network-based approaches (e.g., Urban and Keitt), often drawing on the concept of modularity from Social Network Science, Physics and Biology, and using network-based tools from Statistical Physics that account for weighted (non-binary), directed network data (e.g., Fletcher et al.). Progress has been made in developing network-based tools for analyzing weighted monopartite networks (e.g., Clauset et al.). In weighted networks, the links between two species may be quantified in terms of their functional connectivity, i.e., the strength of their interaction. For example, in pollinator-visitation networks, pollinators interact with flowers, but pollinators do not interact among themselves (Vazquez et al.); such networks are bipartite. A major challenge in using weighted bipartite networks in Ecology is that many of the analytical tools available require one-mode projections of weighted bipartite networks (e.g., Martin Gonzalez et al.; Guillaume and Latapy), meaning that potentially useful information on ecological connectivity is lost. However, tools are being developed to analyze weighted bipartite networks (e.g., Dormann and Strauss). Multi-layer networks are increasingly being used in Ecology, with the advantage over simpler networks that they allow for analysis of inter-habitat connectivity of species, and of processes spanning multiple spatial and temporal scales, contributing to the FC of ecosystems (Timoteo et al.).
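The information loss caused by one-mode projection can be illustrated with an invented visitation dataset: two different bipartite networks can collapse to the same pollinator-pollinator projection.

```python
# Why one-mode projection of a bipartite network loses information:
# two different pollinator-flower visitation datasets (invented for
# illustration) yield identical pollinator-pollinator projections.
from itertools import combinations

def project(visits):
    """One-mode projection onto pollinators: the weight of a pollinator
    pair is the number of flower species both of them visit."""
    proj = {}
    for p, q in combinations(sorted(visits), 2):
        shared = len(set(visits[p]) & set(visits[q]))
        if shared:
            proj[(p, q)] = shared
    return proj

net1 = {"bee": ["daisy", "clover"], "fly": ["daisy", "clover"]}
net2 = {"bee": ["rose", "thyme"], "fly": ["rose", "thyme"]}

# Different flower communities, identical projection:
print(project(net1), project(net2), project(net1) == project(net2))
```

The flower identities, which may matter ecologically, are irrecoverable from the projection, which is exactly the loss of information the text describes.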
Advances are being made in the analysis of multi-layer ecological networks, including recent developments in the analysis of the modular structure of ecological networks. A recent study looked, for the first time, at modular structure (seed-dispersal modules) across the layers of a multi-layer network. A strength of using multi-layer networks in the analysis of ecological systems is that they allow differentiation of intra-layer and inter-layer connectivity within the multi-layer network (Pilosof et al.).
Whilst multi-layer networks are potentially a valuable tool for measuring connectivity in ecological systems, their application is often limited by the amount of system complexity that can be sampled and analyzed, potentially leading to an over-simplification of real ecological networks (Kivela et al.).
Social network scientists study the social behaviour of society, including the relationships among individuals and groups. There is a long history of social network theory, which views social relationships in terms of individual actors (nodes) and relationships (links) which together constitute a network. This history dates back to the development of the sociogram, describing the relations among people, by Jacob Moreno. Later work by Leavitt, White, Freeman, Everett, Borgatti, and Wasserman and Faust created a foundation of social theory frameworks based on network analysis.
In many cases the theory that was developed to understand social systems was subsequently applied in fields such as ecology. Social scientists have continued to lead development in key areas, such as the statistical analysis of motifs, the small building blocks found in networks (Robins et al.). In recent times the incorporation of ecological and social theory to facilitate socio-ecological analysis has expanded social networks to include ecological systems (Janssen et al.). The focus on sustainability and resilience within these multifaceted networks continues to spawn novel solutions and advanced techniques (Bodin and Tengo; Kininmonth et al.).
Given that social network theory is often centred on the micro-interactions of people, there is a convincing argument that the fundamental unit is the person (Wasserman and Faust). Certainly, many published networks in sociology are based on the interaction history of people within a small group (Sampson; Zachary). However, with the advent of technologies such as mobile phones, the internet, online gaming and social web pages (e.g., Facebook), this definition of the fundamental unit is less certain, and some researchers now use the interaction itself as the unit of study (Garton et al.).
Ideas and behaviours that spread through a society, known as memes (Dawkins), can also be studied, for example through textual analysis (Treml et al.). From a network perspective, the individual human is not represented by a single node in these cases, but instead might have temporary links to the ideas and behaviours that are in circulation.
For example, we are aware of the spread of technologies, such as pottery styles, across continents, but we remain unaware of the individuals involved. For many researchers, a meso-scale focus on populations facilitates the analysis of organisational structures and their interactions (Ostrom). This hierarchical nature of social interactions has resulted in an increased emphasis on organisational culture as a defining influence on the social network (Sayles and Baggio).
Utilizing multi-layer networks to explore complex social theory promotes the conceptual possibility of combining fundamental units (Bodin). For example, the management of natural resources across a region requires a functioning social network within the management agencies (Bodin and Crona; Kininmonth et al.). However, the analysis of multi-layer networks that combine the fundamental units of organisations (often with cultural attributes) and individuals has demanded new methodological advances, particularly in the interpretation of decision-making and engagement between the actors embedded within the associated organisations (Sayles and Baggio).
In this regard, the analysis of the diverse suite of roles that actors and organisations play is highly topical for understanding the long- and short-term dynamics of social systems. The development of social networks has primarily been based on observed interactions between members of a group, and these interactions have been used to generate structural networks. These networks have then been used to determine the basis for subsequent events, such as a split in the group, based solely on the distribution of links (Sampson; Zachary). For simple networks and simple events this approach appears to have merit, but when networks become complex or highly dynamic the method is limited in terms of analytical power.
To bridge the link to a more functional approach requires understanding the processes happening at the individual level, such that the links have meaning at a functional level. One solution is to understand the functional meaning of simple network structures, i.e., motifs. The powerful component is then to try to recreate the larger network from the observed frequencies of specified motifs (Robins et al.). This approach has significant statistical power, rather than offering just qualitative comparisons, and can be useful for many research objectives (see figure).
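Motif counting of this kind can be sketched on a toy directed network; the network and the two motif types chosen below are illustrative, and ERGM-style approaches in the spirit of Robins et al. build statistical models on exactly such counts.

```python
# Counting two simple motif types in a small directed interaction
# network (edges and motif choices are invented for illustration).
from itertools import permutations

edges = {(0, 1), (1, 0), (1, 2), (2, 3), (0, 2), (3, 1)}
nodes = {n for e in edges for n in e}

# Motif 1: reciprocated dyads (mutual ties between two actors).
mutual = sum(1 for (a, b) in edges if a < b and (b, a) in edges)

# Motif 2: transitive triads (a->b, b->c, plus the shortcut a->c).
transitive = sum(1 for a, b, c in permutations(nodes, 3)
                 if (a, b) in edges and (b, c) in edges and (a, c) in edges)

print(mutual, transitive)   # 1 2
```

Comparing such counts against their expected frequencies in randomized networks is what gives the approach its statistical, rather than merely qualitative, character.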
The difficulty with this method is translating the human response in an experimental setting, rather than in real life, where the consequences are often of high impact. Phenomena such as Small-World topology have highlighted the widespread effect of structure and function on larger network dynamics (Travers and Milgram). Link prediction is also becoming widely used in social network studies to predict future interactions and the evolution of a network from the network topology alone (e.g., Liben-Nowell and Kleinberg).
Figure: the common-resource-pool motif subset displayed across effective-complexity space, showing the various combinations of social interactions (white) that govern connected natural resources such as wetlands (grey); from Kininmonth et al.
Complicating the conceptual link between structure and function for social networks is the influence of culture. In particular, cultural norms strongly influence the responsiveness of social network structures, such that different cultures are likely to generate different responses to identical network structures (Malone; Stephanson and Mascia). Key to this influence is the human propensity for diverse communication methods, which has inflated the effect of memory on the function of interaction networks.
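The topology-only link prediction mentioned above can be sketched by scoring currently unlinked pairs by their number of common neighbours; the friendship network below is invented for illustration.

```python
# Common-neighbour link prediction on an invented friendship network:
# the unlinked pair sharing the most neighbours is the predicted tie.
friends = {
    "ana": {"ben", "cho"},
    "ben": {"ana", "cho", "dev"},
    "cho": {"ana", "ben", "dev"},
    "dev": {"ben", "cho", "eli"},
    "eli": {"dev"},
}

def predict(friends):
    """Rank currently unlinked pairs by common-neighbour count."""
    people = sorted(friends)
    scores = {}
    for i, p in enumerate(people):
        for q in people[i + 1:]:
            if q not in friends[p]:
                scores[(p, q)] = len(friends[p] & friends[q])
    return max(scores, key=scores.get), scores

best, scores = predict(friends)
print(best, scores[best])
```

Here "ana" and "dev" share two friends while every other unlinked pair shares at most one, so the tie between them is the one predicted to form next; richer predictors weight or normalize the shared neighbourhood, but the topology-only principle is the same.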
This memory effect is also likely to affect individual responses to repeated social interactions. Members of society will respond to and interpret particular interactions differently based on their age group and background, and this is evident in the expansion of computer-assisted social networks, often binding diverse community groups (Garton et al.). The complexity that an evolving mix of cultures brings to the analysis of social networks is a significant challenge to providing a general set of rules of social engagement across the planet.
The emergent behaviours observed within social networks have spawned many significant publications, from the splitting of monks at an abbey (Sampson) to the smoking habits of the general population derived from friendship clusters (Bewley et al.). The resilience of social systems is now seen as a direct response to the topological structure, such as small-world or scale-free (Holling). Translating the resilience concept from a structural perspective involves maintaining the integrity of the network, even though this is difficult to predict or measure. Methods that impose a process on the nodes and links, such as Susceptible-Infected-Recovered models for disease propagation, can be highly dependent on density and centrality measures.
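The dependence of such spreading processes on density and centrality can be illustrated with a minimal Susceptible-Infected-Recovered (SIR) sketch; the hub-and-spoke graph, transmission and recovery probabilities below are illustrative assumptions rather than values from the studies cited.

```python
import random

def sir_step(adj, state, beta, gamma, rng):
    """One synchronous update: S -> I via infected neighbours, I -> R with probability gamma."""
    new_state = dict(state)
    for node, nbrs in adj.items():
        if state[node] == "S":
            k = sum(1 for n in nbrs if state[n] == "I")
            # Each infected neighbour transmits independently with probability beta
            if k and rng.random() < 1 - (1 - beta) ** k:
                new_state[node] = "I"
        elif state[node] == "I" and rng.random() < gamma:
            new_state[node] = "R"
    return new_state

def final_outbreak_size(adj, seed_node, beta=0.3, gamma=0.2, steps=200, seed=0):
    rng = random.Random(seed)
    state = {n: "S" for n in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        if "I" not in state.values():
            break
        state = sir_step(adj, state, beta, gamma, rng)
    return sum(1 for s in state.values() if s != "S")  # nodes ever infected

# Hub-and-spoke graph: node 0 has maximal degree and centrality
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
size = final_outbreak_size(adj, seed_node=0)
```

Seeding the high-centrality hub lets the infection reach every spoke in one hop, whereas a leaf seed must first pass through the hub; comparing outbreak sizes across seed nodes and graphs is a simple way to probe this structural dependence.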
The emergence of a network property such as resilience or effectiveness is conditional on the interactions of the entire network.
To complicate matters further, models of social behaviour that recognise the diversity of social interactions across a population remain elusive.

Figure: Network diagram of the interactions of fishers with the people who buy fish. The diagram highlights the emergent property of organised fishing businesses that depend on access to capital.

From the early research efforts of Moreno came the visual analysis of social networks, depicting people as nodes joined by links representing their interactions.
Gradually, the application of mathematics defined the various patterns observed. In particular, the work of Harary laid the foundation for the structural analysis of social networks. The advent of fast computing was necessary to enable more dynamic analyses, including the evaluation of networks against non-random networks.
Centrality and link-density measures formed the basis of many actor-level analytical tools (Garton et al.). Topological configurations that influence network function were then incorporated into the analytical framework, notably through motif analysis (Robins et al.), although this technique is still restricted in the configurations that can be utilised for analysis.
The greatest challenge in the field of social network analysis is the extension of the analytical techniques to encompass the postulates of the socioecological paradigm. Understanding heterogeneous networks across hierarchical systems within dynamic structures remains a subject of rapid development (Leenhardt et al.).
Measuring connectivity in the social sciences is difficult due to ethical, practical and philosophical issues. These influences are encountered when collecting the data that describe the connectivity (Garton et al.). Questionnaires that seek to record a range of social interactions are hampered by privacy considerations. Ethical considerations mean the use of publicly collected data must remain anonymous and limited to the case in question. Provoking individuals to react through the use of psychological experiments can be fraught with danger, as Stanley Milgram demonstrated.
Another complication is the practical issue of who can conduct the interviews, and which organisation they represent, since people will respond differently to the person asking the questions based on their past interaction history or the interview context (Garton et al.). The alternative is collecting large volumes of data on connecting behaviour, such as mobile-phone records, but these are limited to the numerical ID of the caller rather than a fully described demographic suite.
In some cases the use of synthetic populations can help to circumvent these limitations (Namazi-Rad et al.). Philosophical considerations are required to understand the complex human responses to simple observations of connections. Applying a Marxist rather than a Durkheimian perspective will lead to different interpretations of the observed changes in social network structure (Calhoun). However, couching the network analysis in a particular school of thought is a powerful mechanism to reduce the vagueness of fundamental descriptions.
There are a number of important similarities in the way that the concept of connectivity is approached, and in the tools that are used, within the disciplines explored. Notably, though, there are also significant differences, which provide an opportunity for cross-fertilization of ideas to further the application of connectivity studies and improve understanding of complex systems.
This section (i) evaluates the key challenges by drawing upon differences in the ways they are approached across the different disciplines (Table 1), enabling (ii) identification of opportunities for cross-fertilization of ideas and development of a unified approach in connectivity studies via the development of a common toolbox. We then (iii) outline potential future avenues for research in exploring SC-FC relations.
Within all the disciplines explored, the fundamental unit employed in any connectivity analysis depends on the spatio-temporal context of the study and the specific research question; this applies even where a clear fundamental unit might seem self-evident. The spatial and temporal scale of the fundamental unit may span orders of magnitude within a single discipline, and may thus have to be redefined for each particular study.
For example, whilst for some applications in Neuroscience it is appropriate to adopt the neuron as the fundamental unit, for others the cortical area (many orders of magnitude larger in size) may be more appropriate, notably in cases where it becomes challenging to address adequately the connectivity of neurons due to computational limitations. This issue is also present in Geomorphology, where adopting individual sediment particles as the fundamental unit would become too computationally demanding.
In this sense, there are parallels between connectivity and the field of numerical taxonomy (Sneath and Sokal) where, despite the obvious taxonomic unit being the individual organism, an arbitrary unit termed an operational taxonomic unit was employed. The exception to this general statement is the field of Ecology, where the ecosystem provides a conceptual unit that can be applied at any spatial scale. The concept of the ecosystem was introduced by Tansley and has been subject to much debate since.
Despite the shortcomings of the ecosystem concept, within connectivity studies it is nonetheless useful to have an overarching concept that can be employed at any scale. The ecosystem concept is particularly useful when the interactions connectivity between different organizational levels are of interest, with an ecosystem at a lower hierarchical level forming a sub-unit of an ecosystem at a higher hierarchical level. Many systems are hierarchically organised, and therefore a key question for other disciplines is whether identifying something theoretically similar to the ecosystem concept may be useful.
For many applications in connectivity studies, appropriate conceptualisation and operationalisation of the fundamental unit will depend on the purpose of the investigation. For example, where interventions within a system have the goal of managing or repairing a property of that system, the scale of the fundamental unit may be specified to work within certain system boundaries for a particular purpose. But as noted in the case of Ecology (section Defining the Fundamental Unit), it is critical that, whilst defining the fundamental unit, relationships that cross scales are also clearly defined.
Although in most disciplines the fundamental unit corresponds to some physical entity, in Social Network Science, for example, it may be more abstract. More abstract conceptualisations of the fundamental unit may be fruitful in other disciplines where the definition of a fundamental unit as a physical entity has proved difficult (e.g., Geomorphology), or in modelling approaches to examining connectivity. Furthermore, the notion in Systems Biology that the fundamental unit is a concept dependent upon the current state of knowledge of the system under study is a valuable point that merits wider consideration.
There is general consensus that SC is derived from network topology whilst FC is concerned with how processes operate over the network. In all the disciplines considered, the separation of SC from FC is commonplace, due to the ease with which they can be studied separately, especially in terms of measuring and quantifying connectivity. The success in separating SC and FC in Systems Biology has been attributed to the fact that the structural properties and snapshots of biological function are typically measured in independent ways, whereas elsewhere it is common for FC to be inferred from measurements of SC.
For example, in Geomorphology it is well established that structural-functional feedbacks drive system evolution and emergent behaviour, and it is common in some applications to explore these feedbacks. There is a similar tendency in Neuroscience to focus on structural-functional interactions rather than the full suite of reciprocal feedbacks between structure and function. However, the increasing recognition within Geomorphology and Neuroscience of reciprocal feedbacks is heightening the need for additional tools that will allow the evolution of SC and FC, and the development of emergent behaviour, to be understood more fully.
The importance of such feedbacks is highlighted in Computational Neuroscience, in the case where frequently used networks persist, whilst rarely used links are degraded leading to the development of network topology over time. Nevertheless, separating SC and FC does permit insights into the behaviour of systems insofar as it permits predictive models of function from structure that are amenable to experimental testing.
The ease and meaningfulness with which SC and FC can be separated will also depend on the timescale over which feedbacks occur within a system. Structural connectivity can only be usefully studied independently of FC if the timescale of the feedbacks is large compared to the timescale of the observation of SC. Any description of SC is merely a snapshot of the system. For that snapshot to be useful it needs to have a relatively long-term validity. Thus, for meaningful separations of SC and FC to be made, it is paramount to know how feedbacks work, the timescales over which they operate, and how connectivity helps us to understand these feedbacks.
There are striking examples from several of the disciplines explored here of the ways in which feedbacks between SC and FC can lead to the co-evolution of systems towards a phase-transition point; this is seen in Computational Neuroscience, and in Ecology and Geomorphology, where system-intrinsic SC-FC feedbacks shift a system to an alternate stable state. Linked to SC-FC relations, and to the validity of separating the two, is the concept of memory. Memory is about the coexistence of fast and slow timescales. Qualitatively speaking, the length distribution of cycles in a graph can be viewed as being related to a distribution of timescales.
Changes to SC in response to functional relationships imprint memory within a system. Thus, key questions are: How far back does the memory of a system go? Is memory cumulative? In systems subject to perturbations (possibly true for all disciplines studied here), which perturbations control memory and its erasure? What are the timescales of learning in response to memory?
Other disciplines have similarly struggled to comprehend the instantaneous non-linear behaviour of their systems in terms of memory. In Social Network Science it is possible to speak of culture, which raises the notion of a hierarchy of memory effects on connectivity: one that has not yet been explored. Of all the key challenges facing the use of connectivity, memory appears to be one which no discipline has yet resolved.
Emergence is a characteristic of complex systems, and is intimately tied to the relationship between SC and FC. In this sense, a fundamental unit is an emergent property of microscopic descriptions. An important question is how far does the analysis of connectivity help understand emergence? As noted in section Understanding emergent behaviour , the co-evolution of SC and FC offers an interesting possibility for the overarching perspective of self-organization and emergent behaviours, as the system now can, in principle, tune itself towards phase transition points. Thus, by separating SC and FC in our analyses of connectivity, we remove the opportunity to understand and to quantify emergence — to understand how a system tunes itself towards phase transition points and the role of external drivers.
Without tools that can deal with SC and FC simultaneously, it is challenging to see how connectivity can be used to improve understanding of emergent behaviour. However, some suitable tools do exist. For example, adaptive networks, which allow for a coevolution of dynamics on the network in addition to dynamical changes of the network (Gross and Blasius), provide a powerful tool with the potential to drive forward our understanding of how connectivity shapes the evolution of complex systems.
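The coevolution of dynamics and topology can be sketched with a toy adaptive network in which links between nodes in different states are rewired towards like-minded nodes; the states, rewiring rule and graph below are illustrative assumptions in the spirit of, but much simpler than, the models reviewed by Gross and Blasius.

```python
import random

def adaptive_step(edges, state, rewire_prob, rng):
    """Rewire each discordant link (endpoints in different states) with some probability."""
    new_edges = set()
    nodes = list(state)
    for u, v in edges:
        if state[u] != state[v] and rng.random() < rewire_prob:
            # u abandons the discordant link and attaches to a like-minded node
            like = [n for n in nodes if n != u and state[n] == state[u]]
            if like:
                new_edges.add(tuple(sorted((u, rng.choice(like)))))
                continue
        new_edges.add((u, v))
    return new_edges

state = {0: "A", 1: "A", 2: "B", 3: "B"}   # fixed node states, for simplicity
edges = {(0, 1), (0, 2), (1, 3), (2, 3)}   # two discordant links: (0,2) and (1,3)
rng = random.Random(1)
for _ in range(50):
    edges = adaptive_step(edges, state, rewire_prob=0.5, rng=rng)
discordant = sum(1 for u, v in edges if state[u] != state[v])  # topology has adapted
```

In a full adaptive-network model the node states would also update in response to the changing topology, closing the SC-FC feedback loop.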
Approaches are used in Computational Neuroscience that look at the propagation of excitation through a graph, showing waves of self-organization around hubs, thus allowing exploration of the conditions that lead to self-organised behaviour. However, even in this example, there is still great demand for new ideas that will more easily accommodate the study of memory effects in all their various guises, and of emergent properties. In Geomorphology and Ecology, key studies demonstrate how incorporating SC and FC into studies of system dynamics allows for the development of emergent behaviour (e.g., Stewart et al.). However, such examples are relatively rare, which highlights the scope for trans-disciplinary learning to help drive forward our understanding of emergent behaviour. Link prediction is a potentially useful tool that has been applied, for example, in Systems Biology and Social Network analysis. It can be used to test our understanding of how connectivity drives network structure and function (Wang et al.).
If a comprehensive understanding of the SC and FC of a network and their interactions has been derived, then we should be able to predict missing links (Lu et al.). Thus, prediction and network inference, even though they blur the distinction between SC and FC (see section Neuroscience), can be used to identify the most important links in a network. In view of the widespread adoption of the concept of connectivity, it may seem surprising that actually measuring connectivity remains a key challenge. However, such is the case: because connectivity is an abstract concept, operationalizing models into something measurable is not straightforward.
The imperative here is to consider SC and FC separately. For the former, some disciplines (e.g., Geomorphology, Ecology) have developed indices of connectivity. Furthermore, there is a concern as to the usefulness of such indices other than as descriptions of SC, as might equally be said of clustering coefficients and centrality measures. Systems Biology, on the other hand, does not attempt to measure SC per se, but infers SC based on accumulated knowledge of the system.
In that sense, connectivity may be seen as a means of describing current understanding. Neuroscience, in contrast again, measures connectivity directly through experimentation. How far such an approach could be applied in other disciplines raises the issue of ethics, as discussed in section 3. Only in the case of Computational Neuroscience, which deals with analysed entities whose properties are defined a priori, is measuring SC straightforward. Of the two, FC poses the greater measurement problem.
Without such a description, FC can be derived from fluxes. Link prediction is also a potentially useful tool for deriving a network-based abstraction of a system where it is infeasible to collect the data on SC and FC required to parameterise all links, or where links, by their very nature, are not detectable (Cannistraci et al.).
This problem of observability is inherent in Systems Biology, where link types can be very diverse, and it has already been noted that databases will drift in time. The topological prediction of novel interactions in Systems Biology is therefore particularly useful (Cannistraci et al.). The use of link prediction also raises the possibility that data can be collected to represent a subset of a network (thereby reducing data collection requirements), with link prediction used to estimate the rest of the network (Lu et al.).
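As a concrete illustration, the widely used common-neighbours heuristic scores each absent link by the number of neighbours its endpoints share; the toy graph below is an illustrative assumption, not data from the studies cited.

```python
import itertools

def common_neighbour_scores(adj):
    """Score every non-adjacent node pair by its number of shared neighbours."""
    scores = {}
    for u, v in itertools.combinations(sorted(adj), 2):
        if v in adj[u]:
            continue  # link already observed; only absent links are scored
        scores[(u, v)] = len(set(adj[u]) & set(adj[v]))
    return scores

# Toy graph: A-B, A-C, B-D, C-D; the absent pairs (A,D) and (B,C) each share two neighbours
adj = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
scores = common_neighbour_scores(adj)
```

High-scoring absent pairs are the candidate missing links; in practice such scores are validated by withholding a fraction of known links and checking how many are recovered.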
Separate from, but directly linked to, measuring connectivity is the analysis of the measurements. The most commonly applied approach is the use of graph theory. This powerful mathematical tool has yielded significant insights in fields as diverse as Social Network Science, Systems Biology, Neuroscience, Ecology, and Geomorphology. However, in many applications of network-based approaches, simply knowing whether a link is present or absent is insufficient. This issue can be dealt with by providing a more detailed representation of the network using weighted or directional links. The use of weighted links is common within network science (see, for example, Barratt et al.; Masselink et al.). Using a weighted network can provide an additional layer of information in the characterisation of a network that carries with it advantages for specific applications, and to ignore such information is to throw out data that could potentially help us to understand these systems better (Newman); hence the importance of using measures that incorporate the weights of links (Opsahl and Panzarasa). More recently, further advances have been made in network-based abstractions of systems. For example, in Ecology multi-layer networks are increasingly being used; these overcome the limitations of mono-layer networks by allowing the study of connections between different types or layers of networks, or interactions across different time steps.
Similarly, bipartite networks have been used to provide a more detailed representation of different types of nodes in a network. These more complex network-based approaches carry the advantage that connectivity within and between different types of entities can be assessed in more detail. However, whilst there are many advantages in using more complex network-based abstractions of a system (weighted, bipartite and multi-layer networks), there are also inherent limitations, as many of the standard tools of statistical network analysis applicable to binary networks are no longer available.
In the case of weighted networks, even the possibility of defining and categorizing a degree distribution is lost. In some cases there are ways to modify these tools for application to weighted networks, but one loses comparability with the vast inventory of analysed natural and technical networks available. A further problem of assigning weights to network links is that it requires greatly increased parameterisation of network properties, which may in turn start to drive the outcome of using the network to help characterise SC and FC, and may influence any emergence we might otherwise have seen.
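As an example of a weight-aware measure that avoids discarding this information, node strength (the sum of the weights of a node's links) can be computed directly from a weighted edge list; the edges below are an illustrative assumption.

```python
def node_strength(weighted_edges):
    """Return {node: sum of weights of incident links} for an undirected weighted network."""
    strength = {}
    for u, v, w in weighted_edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    return strength

# Toy weighted network with three nodes and three links
edges = [("a", "b", 2.0), ("a", "c", 0.5), ("b", "c", 1.0)]
s = node_strength(edges)  # a: 2.5, b: 3.0, c: 1.5
```

Node strength reduces to degree when all weights equal one, which is one reason it is a natural first generalisation of binary-network measures.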
However, in recognition of the importance of not throwing away the information associated with the weights of links, tools to deal with weighted links are increasingly available, including the revised clustering coefficient (Opsahl and Panzarasa) and node strength, the sum of the weights attached to the links belonging to a node (Barratt et al.). As already discussed in the case of Ecology, a limitation of bipartite networks is that their analysis requires a one-mode (monopartite) projection of the network, as many of the tools available for monopartite networks are not so well developed for bipartite networks.
An important issue when analyzing bipartite networks is therefore devising a way to obtain a projection of the layer of interest without generating a dense network whose topological structure is almost trivial (Saracco et al.). Potential solutions to this issue include projecting a bipartite network into a weighted monopartite network (Neal), and retaining links in the monopartite projection only between nodes belonging to the same layer that are significantly similar (Saracco et al.).
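A minimal sketch of such a projection counts shared neighbours in the other layer as link weights (one simple choice among several; the fisher-buyer data below are an illustrative assumption).

```python
import itertools

def weighted_projection(bipartite_adj, layer):
    """Project onto `layer`: link two layer-nodes with weight = number of shared
    neighbours in the other layer; bipartite_adj maps layer-nodes to neighbour sets."""
    proj = {}
    for u, v in itertools.combinations(sorted(layer), 2):
        shared = len(bipartite_adj[u] & bipartite_adj[v])
        if shared:
            proj[(u, v)] = shared
    return proj

# Toy bipartite data: fishers F1..F3 linked to buyers b1, b2
bip = {"F1": {"b1", "b2"}, "F2": {"b1"}, "F3": {"b2"}}
proj = weighted_projection(bip, {"F1", "F2", "F3"})
```

Thresholding these weights, for example by retaining only statistically significant co-occurrences, yields the sparser projections discussed above.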
A further issue is that it is often not possible to recover the bipartite graph from which the classical form has been derived (Guillaume and Latapy). Developments are being made in our ability to analyze bipartite networks directly; for example, progress has been made in developing link-prediction algorithms applicable to bipartite networks (e.g., Cannistraci et al.). Similarly, applying standard network techniques to multi-layer networks requires aggregating data from the different layers of a multi-layer network into a mono-layer network (De Domenico et al.).
Careful consideration of the most appropriate tools is thus required when measuring connectivity using a network-based abstraction. Can a sensible projection of a bipartite network be derived, to facilitate analysis of the network? Is it possible to derive a monoplex abstraction of a multiplex network without losing too much information?
From this review it is clear that the persistence of the four key challenges identified depends on the availability of different types of tools and their varied applications across the disciplines (Table 1). Notably, disciplines that are more advanced in their application of network-based approaches appear to be less limited by the four key challenges.
The conceptual similarities in SC and FC observed between the disciplines discussed here, in which a wide range of different types of systems can be represented as nodes and links (Fig.), open the way to a common toolbox. This common toolbox can be employed across the different disciplines to solve a set of common problems.
Network-based approaches drawing upon the tools of Graph Theory and Network Science reside at the core of this common toolbox, as they have been applied in disciplines where the key challenges pose less of a problem.

Figure: Network-centred common toolbox. Diagram showing how a network-centred common toolbox implicitly addresses the four inextricably linked key challenges: defining the fundamental unit, separating SC and FC, understanding emergent behaviour, and measuring connectivity. A. Groups of nodes form fundamental units at higher levels of organization (denoted by grey dashed lines); B. Topological representation of system structure (spatially embedded, depending on the system in question); C. Identifying parts of the network that are dynamic (functionally connected); D. Adaptive network where the evolution of topology depends on the dynamics of nodes (source: Gross and Blasius); network adaptation at multiple (cross-scale) levels of organization shapes emergent behaviour; E. FC may have an emergent aspect (self-organised, collective patterns on the structural network) that is independent of network adaptation; F. The fundamental unit should dictate the measurement approach; G. How we measure connectivity determines our ability to detect how connectivity leads to emergent behaviour.
A common toolbox requires that tools are readily accessible. The widespread uptake of the tools of Graph Theory has been facilitated by the implementation and dissemination of various graph-theoretical models. Facilitating this uptake are the freely available stand-alone open-source packages, and enhanced parts of more general data-analysis packages, all of which are becoming more sophisticated with time.
A common toolbox can draw upon many existing freely available tools. One example is the Brain Connectivity Toolbox (Rubinov and Sporns), which was developed for complex-network analysis of structural and functional brain-connectivity data sets using the approaches of graph theory. More recently this toolbox has been used to investigate braided river networks (Marra et al.).

Continued knowledge accumulation.
This enables the fundamental unit to be defined based on the system in question, which is then represented within the network as a node. To deal with multi-scale dynamics within a system, groups of nodes at one level of organization can form a fundamental unit at a higher level of organization.
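This grouping step can be sketched as a simple coarse-graining of the node-level network into a group-level one; the grouping and edge list below are illustrative assumptions.

```python
def coarse_grain(edges, grouping):
    """grouping maps each node to its group; two groups are linked whenever
    any of their member nodes are linked at the lower level."""
    super_edges = set()
    for u, v in edges:
        gu, gv = grouping[u], grouping[v]
        if gu != gv:
            super_edges.add(tuple(sorted((gu, gv))))
    return super_edges

# Six nodes in a chain, grouped into two fundamental units A and B
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
grouping = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
super_edges = coarse_grain(edges, grouping)  # only the A-B bridge survives
```

Links within a group are absorbed into the higher-level fundamental unit, while links between groups become the connectivity at the higher level of organization.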
Network-based approaches. These are well suited to the separation of SC and FC through the topological representation of system structure (SC) and through identifying parts of the network that are dynamic (FC). The spatial embeddedness of many networks is an essential feature, whereby the location of nodes and their spatial proximity are important features of the system that must be accounted for.
Further, the position of nodes within a network, or node characteristics, may alter the relative weighting of links.

Accounting for network adaptation. In recognition that SC-FC relations evolve (potentially leading to emergent behaviour), accounting for network adaptation, where the evolution of network topology depends on node dynamics, is essential (Gross and Blasius). Only by dealing with network adaptation can SC-FC feedbacks and interactions be addressed.
Also important for understanding emergent behaviour is the capacity for fundamental units to be represented at multiple levels of organization, since this is critical where emergent behaviour is the result of cross-scale interactions and feedbacks. Whilst connectivity research in complex systems should not be restricted to the use of a single tool or approach, there are clearly advances that can be made in connectivity studies by merging tools used within different disciplines into a common toolbox approach and learning from examples from different disciplines where certain challenges have already been overcome.
It is important to recognise that not all the tools of the common toolbox will be applicable to all applications in all disciplines, and that some disciplines will only require a subset of approaches. Furthermore, it is important not to overcomplicate analyses, for instance through the use of spatially embedded networks where space is not an important network characteristic, or through the use of weighted links where these are not critical to the representation of a system. Overcomplicating the network representation reduces the scope for some network-based metrics to be used to quantify connectivity.
To operationalise this common toolbox, what is required now is a transdisciplinary endeavour that brings together leading scholars and practitioners to explore applications of connectivity-based tools across different fields, with the goal of understanding and managing complex systems. Examples include: (i) determining how critical nodes shape the evolution of a system and how they can be manipulated or managed to alter system dynamics; (ii) deriving minimal models of SC and FC to capture their relations and identify the most relevant properties of dynamical processes; and (iii) exploring how shifts in network topology result in novel systems.
Key to fulfilling this goal will be: synthesising theoretical knowledge about structure-function connectivity SC-FC relationships in networks; exploring the ranges of validity of SC-FC relationships and reformulating them for usage in the application projects; deriving suitable minimal abstractions of specific systems, such that the tools within the common methodology become applicable. Also important will be the synthesis of distinct methods that are similar in terms of the theoretical basis and share common ways of quantitatively describing specific aspects of connectivity.
An important task will be to test the applicability, compatibility and enhancement of consistent methods in the common toolbox from one discipline to the other. Then, using the common toolbox, it will become possible to explore and understand commonalities in the structure and dynamics of a range of complex systems and hence of the respective concepts that have been developed across scientific disciplines. In addition to these findings, other areas that may yield novel insights into SC-FC relations and assist in understanding commonalities in the structure and dynamics of a range of complex systems can be highlighted.
Examples include estimating the importance of certain network components using the elementary flux mode concept. The importance of certain network components has been demonstrated in Systems Biology, but there are opportunities for all disciplines using network-based approaches to identify which parts of systems (networks) are particularly important. In Systems Biology, elementary mode analysis is used to decompose complex metabolic networks into simpler units that perform a coherent function (Stelling et al.).
Thus, there is opportunity to extend the concept of elementary mode analysis to other disciplines to predict key aspects of network functionality.

Citation: Webster AJ. Multi-stage models for the failure of complex systems, cascading disasters, and the onset of disease. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. There was no additional external funding received for this study. Competing interests: The author has declared that no competing interests exist.
Complex systems such as a car can fail through many different routes, often requiring a sequence or combination of events for a component to fail. The same can be true for human disease, cancer in particular [1–3]. For example, cancer can arise through a sequence of steps such as genetic mutations, each of which must occur prior to cancer [4–8]. The considerable genetic variation between otherwise similar cancers [9, 10] suggests that similar cancers might arise through a variety of different paths.
Multi-stage models describe how systems can fail through one or more possible routes. Here we show that the model is easy to conceptualise and derive, and that many specific examples have analytical solutions or approximations, making it ideally suited to the construction of biologically- or physically-motivated models for the incidence of events such as diseases, disasters, or mechanical failures. A result in Jaynes [13] generalises to give an exact analytical formula for the sums of random variables needed to evaluate the sequential model.
This is evaluated for specific cases. The approach described here can incorporate simple models for clonal expansion prior to cancer detection [5–7] but, as discussed in Sections 8 and 9, it may not be able to describe evolutionary competition or cancer evolution in a changing micro-environment without additional modification. More generally, it is hoped that the mathematical framework can be used in a broad range of applications, including the modelling of other diseases [15–18].
Imagine that we can enumerate all possible routes 1 to n by which a failure can occur (Fig 1). In words, if failure can occur by any of n possible routes, the overall hazard of failure equals the sum of the hazards of failure by all the individual routes. A few notes on Eq 2 and its application to cancer modelling follow. Firstly, systems may differ in manufacturing processes, genetic backgrounds, chance processes or exposures.
Secondly, the stem cell cancer model assumes that cancer can occur through any of n_s equivalent stem cells in a tissue, for which Eq 2 is modified accordingly. So a greater number of stem cells is expected to increase cancer risk, as is observed [21, 22]. As a consequence, many cancer models implicitly or explicitly work in a limiting regime, emphasised in the Appendix of Moolgavkar [14]. Often, failure by a particular path will require more than one failure to occur independently.
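The additivity of hazards over independent routes (Eqs 1 and 2) can be checked numerically: the overall survival obtained from the summed hazard equals the product of the individual route survivals. The constant route hazards below are illustrative assumptions.

```python
import math

def survival(hazard, t, dt=1e-4):
    """S(t) = exp(-integral of the hazard from 0 to t), via a simple Riemann sum."""
    steps = int(t / dt)
    return math.exp(-sum(hazard(i * dt) * dt for i in range(steps)))

def overall_hazard(hazards, t):
    """Eq 1: the hazard of failure by any route is the sum of the route hazards."""
    return sum(h(t) for h in hazards)

# Two routes with constant hazards 0.1 and 0.3 per unit time
h1 = lambda t: 0.1
h2 = lambda t: 0.3
t = 2.0
s_overall = survival(lambda u: overall_hazard([h1, h2], u), t)
s_product = survival(h1, t) * survival(h2, t)  # should agree with s_overall
```

The same construction extends to any number of routes and to time-varying hazards.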
Consider firstly the case where there are m_i steps to failure and the order of failure is unimportant (Fig 2). The probability of surviving failure by the i-th route, S_i(t), is

S_i(t) = 1 − ∏_{j=1}^{m_i} F_ij(t)    (4)

where F_ij(t) is the cumulative probability distribution for failure of the j-th step on the i-th route within time t. It may be helpful to explain how Eqs 1 and 4 are used in recently described multi-stage cancer models [23–25]. Similarly, one can write down the probability of a given stem cell having mutation j.
This is the solution of Zhang et al. Therefore, in addition to the models of Wu and Calabrese being equivalent cancer models requiring m mutational steps, both models also assume that the order of the steps is unimportant. This differs from the original Armitage-Doll model, which considered a sequential set of rate-limiting steps, and was exactly solved by Moolgavkar [14].
Eqs 8 and 9 are equivalent to assuming: (i) equivalent stem cells, (ii) a single path to cancer, (iii) equivalent divisions per stem cell, and (iv) equivalent mutation rates for all steps. Despite the differences in modelling assumptions for Eq 9 and the Armitage-Doll model, their predictions can be quantitatively similar. This approximate solution is expected to become inaccurate at sufficiently long times. An equivalent expression to Eq 8 was known to Armitage, Doll, and Pike [26], as was its limiting behaviour for large n.
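The quantitative similarity at early times can be checked numerically. The sketch below (mutation rates are hypothetical, and the unordered-step form F(t) = ∏_j (1 - exp(-μ_j t)) is assumed for the Eq 8/9-type model) confirms that at times small compared with every 1/μ_j the model approaches the power law ∏_j μ_j t.

```python
import math

# Numerical check (not the paper's own code) that for small times the
# unordered m-step model F(t) = prod_j (1 - exp(-mu_j t)) approaches the
# power law prod_j (mu_j * t), as discussed for Eqs 8 and 9.
mus = [1e-3, 2e-3, 5e-4]   # hypothetical mutation rates
t = 1.0                    # small compared with every 1/mu_j

exact = 1.0
power = 1.0
for mu in mus:
    exact *= 1.0 - math.exp(-mu * t)
    power *= mu * t

assert abs(exact / power - 1.0) < 0.01  # within 1% at small t
```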
The authors [26] emphasised that many different forms for the F_i(t_i) could produce approximately the same observed F(t), especially for large n, with the behaviour of F(t) being dominated by the small-t behaviour of the F_i(t). As a result, power-law behaviour for F(t) is likely at sufficiently small times, and if longer times were observable then an extreme value distribution would be expected [4, 26, 27]. However, the power-law approximation can fail for important cases with extra rate-limiting steps such as a clonal expansion [5-7].
It seems likely that a model including both clonal expansion and cancer detection is needed for cancer modelling, but the power-law approximation could be used for all but the penultimate step, for example. A general methodology that includes this approach is described next, and examples are given in Section 6.
The results and examples of Sections 5 and 6 are intended to have a broad range of applications. Some failures require a sequence of independent events to occur, each following the one before (Fig 3). A well-known example is the Armitage-Doll multistage cancer model, which requires a sequence of m mutations (failures), each occurring at a different constant rate.
The probability density for the failure time is the pdf for a sum of the m independent times t_j to failure at each step in the sequence, each of which may have a different probability density function f_j(t_j). A general method for evaluating the probability density is outlined below, adapting a method described by Jaynes [13]. Writing the density of t = t_1 + ... + t_m as an integral over the step times, constrained by a delta function to sum to t, gives, (13) f(t) = ∫ δ(t - Σ_j t_j) ∏_j f_j(t_j) dt_1 ... dt_m. To evaluate the integrals, take the Laplace transform with respect to t, to give, (14) f̃(s) = ∫ e^{-s Σ_j t_j} ∏_j f_j(t_j) dt_1 ... dt_m. This factorises as, (15) f̃(s) = ∏_j L[f_j](s), giving a general analytical solution as, (16) f(t) = L^{-1}[∏_j L[f_j]](t), where L^{-1} is the inverse Laplace transform, and the same variable s appears in each factor.
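The factorisation of Eq 15 can be illustrated numerically. The sketch below (rates and the evaluation point s are hypothetical) takes two sequential exponential steps, whose factorised transform has the known hypoexponential inverse, and checks that a direct numerical Laplace transform of that density matches the product of the individual transforms.

```python
import math

# Numerical illustration of Eq 15 (factorisation): the Laplace transform
# of the density of t1 + t2 equals the product of the individual
# transforms.  Exponential steps with hypothetical rates mu1, mu2.
mu1, mu2, s = 0.7, 1.3, 0.4

def laplace_exp(mu, s):
    # L[mu * exp(-mu t)](s) = mu / (s + mu)
    return mu / (s + mu)

# Density of the sum (hypoexponential, the known inverse transform):
def f_sum(t):
    return mu1 * mu2 / (mu2 - mu1) * (math.exp(-mu1 * t) - math.exp(-mu2 * t))

# Numerically transform f_sum and compare with the product of transforms:
def laplace_numeric(f, s, tmax=60.0, n=60000):
    dt = tmax / n
    return sum(math.exp(-s * (k + 0.5) * dt) * f((k + 0.5) * dt) * dt
               for k in range(n))

lhs = laplace_numeric(f_sum, s)
rhs = laplace_exp(mu1, s) * laplace_exp(mu2, s)
assert abs(lhs - rhs) < 1e-4
```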
Eq 15 is similar to the relationship between the moment generating functions of discrete probability distributions p_i(t_i) and the moment generating function M(s) for their sum, which has, (17) M(s) = ∏_i M_i(s), and whose derivation is analogous to that of Eq 15 but with integrals replaced by sums. The survival and hazard functions for f(t) can be obtained from Eq 16 in the usual way. For example, (18) S(t) = 1 - ∫_0^t f(u) du, which can be used in combination with Eq 1.
A number of valuable results are easy to evaluate using Eq 16, as is illustrated in the next section. Eq 20 can be solved using the convolution theorem for Laplace transforms, which gives, (21) f(t) = ∫_0^t f_1(u) f_2(t - u) du, and which is sometimes easier to evaluate than two Laplace transforms and their inverse. In general, solutions can be presented in terms of multiple convolutions if it is preferable to do so. If a Laplace transform is unavailable, an analogous calculation using a Fourier transform with respect to t in Eq 13 leads to analogous results, with Fourier transforms in place of Laplace transforms, resulting in Eq 22. Eq 22 is mentioned for completeness, but is not used here.
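A minimal check of the convolution route of Eq 21 (rates are hypothetical): for two exponential steps, the numerical convolution of the step densities should match the known closed-form density of the sum.

```python
import math

# Sketch of the convolution theorem (Eq 21): the density of a two-step
# sequence is the convolution of the step densities.  Exponential steps
# with hypothetical rates mu1, mu2 have the known closed form below.
mu1, mu2 = 0.5, 1.5

def f1(t): return mu1 * math.exp(-mu1 * t)
def f2(t): return mu2 * math.exp(-mu2 * t)

def convolution(t, n=20000):
    """Numerical f(t) = integral_0^t f1(u) f2(t-u) du (midpoint rule)."""
    dt = t / n
    return sum(f1((k + 0.5) * dt) * f2(t - (k + 0.5) * dt) * dt
               for k in range(n))

def closed_form(t):
    return mu1 * mu2 / (mu2 - mu1) * (math.exp(-mu1 * t) - math.exp(-mu2 * t))

assert abs(convolution(2.0) - closed_form(2.0)) < 1e-6
```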
A general solution to Eq 16 can also be given in terms of definite integrals (Eq 23), which can sometimes be easier to evaluate or approximate than Eq 16. A derivation is given in the Supporting Information S1 Appendix, where this is discussed further. Some of the results below are well-known but not usually presented this way; others are new or poorly known.
We will use the Laplace transforms, and their inverses, given in Eqs 25 and 26. For many situations, such as most diseases, failure within a lifetime is unlikely, so the F_i(t) remain small. Then we have the approximation of Eq 28. A well-known example of this approximation is implicit in the original approximate solution to the Armitage-Doll multi-stage cancer model.
Note that an equivalent time-dependence can be produced by different combinations of hazard functions and numbers of steps, provided the resulting leading power of time and its coefficient are unchanged. Moolgavkar [14] used induction to provide an explicit formula for f(t), with, (37) f(t) = Σ_{i=1}^m A_i μ_i e^{-μ_i t}, where, (38) A_i = ∏_{j≠i} μ_j/(μ_j - μ_i). For small times the leading terms in a Taylor expansion of Eq 37 cancel exactly, so that f(t) ≃ (∏_i μ_i) t^{m-1}/(m-1)!, as expected. This feature could be useful for approximating a normalised function when the early-time behaviour approximates an integer power of time. For example, consider m Gamma distributions with different integer-valued shape parameters p_i.
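The exact solution and its small-time cancellation can be verified directly. The sketch below (rates are hypothetical) assumes the hypoexponential form f(t) = Σ_i [∏_{j≠i} μ_j/(μ_j - μ_i)] μ_i e^{-μ_i t} attributed above to Moolgavkar, and checks the small-t power law.

```python
import math

# Sketch of Moolgavkar's exact solution (Eqs 37-38, assumed hypoexponential
# form) for a sequence of m exponential steps with distinct rates:
#   f(t) = sum_i [prod_{j!=i} mu_j/(mu_j - mu_i)] * mu_i * exp(-mu_i t)
mus = [0.2, 0.5, 1.1]   # hypothetical step rates

def moolgavkar_pdf(t):
    total = 0.0
    for i, mu_i in enumerate(mus):
        coeff = 1.0
        for j, mu_j in enumerate(mus):
            if j != i:
                coeff *= mu_j / (mu_j - mu_i)
        total += coeff * mu_i * math.exp(-mu_i * t)
    return total

# Small-t check: the leading Taylor terms cancel, leaving
# f(t) ~ (prod mu_i) * t**(m-1) / (m-1)!
t = 0.01
m = len(mus)
power_law = math.prod(mus) * t ** (m - 1) / math.factorial(m - 1)
assert abs(moolgavkar_pdf(t) / power_law - 1.0) < 0.01
```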
Eq 42 is most easily evaluated with a symbolic algebra package. An advantage of the method described above is that it is often easy to calculate pdfs for sums of differently distributed samples. For the first example, consider two samples from the same or very similar exponential distribution, and a third from a different exponential distribution. More generally, it can be seen that a sum of exponentially distributed samples with different rates smoothly approximates a gamma distribution as the rates become increasingly similar, as expected from the results above.

If a path to failure involves a combination of sequential and non-sequential steps, then the necessary set of sequential steps can be considered as one of the non-sequential steps, with overall survival given by Eq 1 and the survival for any sequential set of steps calculated from Eq 18 (Fig 4).
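The smooth approach of a sum of exponentials to a gamma distribution noted above can be checked numerically. The sketch below (rates are hypothetical and chosen nearly equal) compares the exact hypoexponential density with a gamma density of shape m.

```python
import math

# Numerical illustration: a sum of exponential waiting times with nearly
# equal rates is close to a gamma distribution (shape m, rate mu).
mus = [1.0, 1.001, 1.002]   # nearly equal hypothetical rates
m = len(mus)

def hypoexp_pdf(t):
    """Exact pdf for a sum of exponentials with distinct rates."""
    total = 0.0
    for i, mu_i in enumerate(mus):
        coeff = 1.0
        for j, mu_j in enumerate(mus):
            if j != i:
                coeff *= mu_j / (mu_j - mu_i)
        total += coeff * mu_i * math.exp(-mu_i * t)
    return total

def gamma_pdf(t, mu=1.001):
    return mu ** m * t ** (m - 1) * math.exp(-mu * t) / math.factorial(m - 1)

for t in (0.5, 1.0, 2.0, 4.0):
    assert abs(hypoexp_pdf(t) - gamma_pdf(t)) < 1e-3
```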
For the purposes of modelling, a sequence of dependent steps or multiple routes can be regarded as a single step. Clonal expansion is thought to be an essential element of cancer progression [29], and can modify the timing of cancer onset and detection [5-7, 30-32]. The growing number of cells at risk increases the probability of the next step in a sequence of mutations occurring, and if the cells are already cancerous, it increases the likelihood of detection.
Some cancer models have a clonal expansion of cells as a rate-limiting step [5-7]; an example is the model of Michor et al. This gives a survival function for cancer detection of the form of Eq 49, with the function defined in Eq 50, where a, c are rate constants and N is the total number of cells prior to cancer initiation. Alternatively, we might expect the likelihood of cancer being diagnosed to continue to increase with time since the cancer was initiated.
For example, a hazard function that is linear in time would give a Weibull distribution with shape parameter 2. It is unlikely that either this or the logistic model would be an equally good description for the detection of all cancers, although both may be an improvement on a model without either. Qualitatively, we might expect a delay between cancer initiation and the possibility of diagnosis, and diagnosis to occur almost inevitably within a reasonable time period.
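The linear-hazard case can be checked directly. The sketch below assumes a hazard h(t) = 2t/λ² (λ is a hypothetical scale parameter) and confirms it corresponds to a Weibull distribution with shape parameter 2.

```python
import math

# Check that a hazard linear in time, h(t) = 2 t / lam**2, corresponds to
# a Weibull distribution with shape parameter 2 (lam is a hypothetical
# scale parameter).
lam = 3.0

def survival(t):
    # S(t) = exp(-integral_0^t h(u) du) = exp(-(t/lam)**2)
    return math.exp(-(t / lam) ** 2)

def pdf(t):
    # Weibull(shape=2) density: f(t) = (2 t / lam**2) * exp(-(t/lam)**2)
    return (2.0 * t / lam ** 2) * math.exp(-(t / lam) ** 2)

# The hazard recovered as f/S is linear in t:
for t in (0.5, 1.0, 2.0):
    assert abs(pdf(t) / survival(t) - 2.0 * t / lam ** 2) < 1e-12
```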
Therefore a Weibull or Gamma distributed time to diagnosis may be reasonable for many cancers, with the shorter tail of the Weibull distribution making it a more suitable approximation for cancers whose diagnosis is almost inevitable. The possibility of misdiagnosis or death by another cause is not considered here. Using the convolution formula of Eq 21, we get Eq 51, where the last line follows from an integration by parts.
This may be written as Eq 52. Now consider non-independent failures, where the failure of A changes the probability of a failure in B or C. In general, if the paths to failure are not independent of each other, then the situation cannot be described by Eq 1. Benjamin Cairns suggested exploring the following example: if step 1 of A prevents step 1 of B and vice-versa, then only one path can be followed.
As a consequence, Eq 1 may be inappropriate to describe phenomena such as survival in the presence of natural selection, where competition for the same resource means that not all can survive. In some cases it may be possible to include a different model for the step or steps where Eq 1 fails, analogously to the clonal expansion model [6] described in Section 6.
But in principle, an alternative model may be required. We will return to this point in Section 9. The rest of this section limits the discussion to situations where the paths to failure are independent, but where the failure rate depends on the order of events. An equivalent scenario requires m parts to fail for the system to fail, where the order in which the parts fail modifies the probability of subsequent component failures. As an example, if three components A, B, and C must all fail, then we need to evaluate the probability of each of the 6 possible orderings in turn, and obtain the overall failure probability from Eq 1.
Assuming the paths to failure are independent, there are m! possible routes to failure. We can calculate each route's density using Eq 16, for example giving Eq 55, from which we can construct the survival and hazard functions. Although in principle every term in Eqs 54 and 55 needs evaluating, there will be situations where the results simplify, for example if one route is much more probable than the others.
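The enumeration over orderings can be sketched numerically. In the example below (all rates and position factors are hypothetical), each of the 3! = 6 orderings of components A, B, C is a sequential route of exponential steps whose rates depend on the component's position in the ordering, and Eq 1 combines the routes as independent.

```python
import math
from itertools import permutations

# Sketch (hypothetical numbers): each of the m! orderings of component
# failures A, B, C is a separate route, and a component's failure rate
# depends on its position in the ordering.
base = {"A": 0.10, "B": 0.23, "C": 0.41}
position_factor = (1.0, 1.7, 2.9)   # later failures happen faster

def route_survival(order, t):
    """Survival for one ordered route: a sum of exponential step times."""
    rates = [base[c] * position_factor[k] for k, c in enumerate(order)]
    s = 0.0
    for i, mu_i in enumerate(rates):
        coeff = 1.0
        for j, mu_j in enumerate(rates):
            if j != i:
                coeff *= mu_j / (mu_j - mu_i)
        s += coeff * math.exp(-mu_i * t)
    return s

def overall_survival(t):
    """Eq 1: product of survivals over all m! = 6 routes."""
    s = 1.0
    for order in permutations("ABC"):
        s *= route_survival(order, t)
    return s

assert abs(overall_survival(0.0) - 1.0) < 1e-9
assert overall_survival(2.0) < overall_survival(1.0) < 1.0
```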
A more striking example is when there are very many potential routes to failure, as for the Armitage-Doll model where there are numerous stem cells that can cause cancer. If one route is much more likely than the others then both f t and h t can be approximated as a single power of time, with the approximation best at early times, and a cross-over to different power-law behaviour at later times.
Cancer is increasingly viewed as an evolutionary process that is influenced by a combination of random and carcinogen-driven genetic and epigenetic changes [2, 3, 21, 29, 33-37], and an evolving tissue micro-environment [38-41]. This highlights two limitations of the multi-stage model described here. As noted in Section 8, Eq 1 cannot necessarily model a competitive process such as natural selection, where the growth of one cancer variant can inhibit the growth of another.