A New Approach to Multidisciplinary System Analysis under Aleatory Uncertainty

The decennial cycle in geopolitics: with COVID, 2020 served up its expected surprise. Now what about 2030?

On April 12, 2001, Donald Rumsfeld shared the following memo, written by DoD staff member Linton Wells II:
If you had been a security policy-maker in the world's greatest power in 1900, you would have been a Brit, looking warily at your age-old enemy, France.
By 1910, you would be allied with France and your enemy would be Germany.
By 1920, World War I would have been fought and won, and you'd be engaged in a naval arms race with your erstwhile allies, the U.S. and Japan.
By 1930, naval arms limitation treaties were in effect, the Great Depression was underway, and the defense planning standard said "no war for ten years."
Nine years later World War II had begun.
By 1950, Britain no longer was the world's greatest power, the Atomic Age had dawned, and a "police action" was underway in Korea.
Ten years later the political focus was on the "missile gap," the strategic paradigm was shifting from massive retaliation to flexible response, and few people had heard of Vietnam.
By 1970, the peak of our involvement in Vietnam had come and gone, we were beginning détente with the Soviets, and we were anointing the Shah as our protégé in the Gulf region.
By 1980, the Soviets were in Afghanistan, Iran was in the throes of revolution, there was talk of our "hollow forces" and a "window of vulnerability," and the U.S. was the greatest creditor nation the world had ever seen.
By 1990, the Soviet Union was within a year of dissolution, American forces in the Desert were on the verge of showing they were anything but hollow, the U.S. had become the greatest debtor nation the world had ever known, and almost no one had heard of the internet.
Ten years later, Warsaw was the capital of a NATO nation, asymmetric threats transcended geography, and the parallel revolutions of information, biotechnology, robotics, nanotechnology, and high density energy sources foreshadowed changes almost beyond forecasting.
All of which is to say that I'm not sure what 2010 will look like, but I'm sure that it will be very little like we expect, so we should plan accordingly.
I think you could pick a few holes in it on historical accuracy, but the basic point - that geopolitical tides in the twentieth century shifted dramatically at ten-year intervals - is a cogent one, and it is underscored by the fact that five months after the memo was written, the world's whole geopolitical outlook was catastrophically upended by 9/11.
Contrary to the pattern, you might have thought that the security situation in 2020 looked quite similar to that in 2010. Sure, we've had the Arab Spring, a horrible civil war in Syria, and the Russian invasion of Ukraine, but the basic geopolitical parameters for the West remained the same as in 2010 - Islamic radicalism as the major enemy abroad, increasing worries about a revanchist Russia, and the long-term rise of China casting a growing shadow over American hegemony. From a Western perspective, Trump's America First policy and Brexit have probably been the biggest geopolitical shocks, but my sense is that both will turn out to be fairly inconsequential geopolitically in the long term, and the wheels of the Western liberal order will accommodate, incorporate, and co-opt them over time.
However, as if by some law of nature, COVID has emerged to ensure the ten-year cycle of surprise remains intact. In addition to the disruptive effects of the pandemic itself, we're now seeing a hardening of attitudes toward China, a move away from global supply chains, and a limited revival of the popularity of autarky as a political concept. So let's call coronavirus the '2020 surprise'.
Three questions I'd enjoy hearing people's thoughts on:
First, is Linton Wells' claim that geopolitics looks radically different every ten years really true? To what extent is it an artefact of the selective facts he's presented?
Second, pre-coronavirus, is it fair to say the 2020 geopolitical outlook was broadly similar to the 2010 outlook?
Third - and by far the most interesting - what sort of surprise may be lying in wait in 2030?
I realise that it's silly to ask people to predict true Black Swans, which are by definition unpredictable, emerging from aleatory rather than epistemic uncertainty. But looking back at Wells' list, it's clear that not every decennial paradigm shift was a Black Swan. Despite Wells' framing of 1900, for example, many people in the British security establishment, as well as in popular culture, correctly foresaw that Germany was a bigger long-term threat to the hegemony of the UK than France (for a famous example, see the 1871 novella The Battle of Dorking). So it's not crazy to think we might try to get a bit ahead of the cycle.
So what unexpected shifts might lie ahead?
Let me toss out just one, very briefly and without much elaboration: I think Russia has the potential to be a source of real geopolitical disruption in the coming decade, specifically in relation to the post-Putin order. As Putin steps back from 2024 onwards, there is room for major realignments, especially given that oil and gas revenues (which provide roughly half of the government budget) may well be in long-term decline. The most extreme and catastrophic scenario would be internal struggles leading to open military conflict among rival factions and potentially even civil war. While I think this possibility is worth keeping on our radar - simply because of how catastrophic it would be - it seems fairly unlikely to me. More realistically, I can see major geopolitical realignments following from a shift in the ideological outlook of Putin's successors. One possible scenario, for example, would be a new 'Sino-Soviet split' in which Russia realigns with the West out of fear of rising Chinese power.
I realise that's an underdeveloped suggestion, but I wanted to mention it, partly to stick a flag in it, and partly as a goad to discussion. I'd be interested to hear what others think!
submitted by Doglatine to geopolitics

[R] Satellite conjunction analysis and the false confidence theorem

TL;DR: A new finding relevant to the Bayesian-frequentist debate was recently published in a math/engineering/physics journal.
A paper with the same title as this post was published on 17 July 2019 in the Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences.
Some excerpts ...
From the Abstract:
We show that probability dilution is a symptom of a fundamental deficiency in probabilistic representations of statistical inference, in which there are propositions that will consistently be assigned a high degree of belief, regardless of whether or not they are true. We call this deficiency false confidence. [...] We introduce the Martin–Liu validity criterion as a benchmark by which to identify statistical methods that are free from false confidence. Such inferences will necessarily be non-probabilistic.
From Section 3(d):
False confidence is the inevitable result of treating epistemic uncertainty as though it were aleatory variability. Any probability distribution assigns high probability values to large sets. This is appropriate when quantifying aleatory variability, because any realization of a random variable has a high probability of falling in any given set that is large relative to its distribution. Statistical inference is different; a parameter with a fixed value is being inferred from random data. Any proposition about the value of that parameter is either true or false. To paraphrase Nancy Reid and David Cox [3], it is a bad inference that treats a false proposition as though it were true, by consistently assigning it high belief values. That is the defect we see in satellite conjunction analysis, and the false confidence theorem establishes that this defect is universal.
This finding opens a new front in the debate between Bayesian and frequentist schools of thought in statistics. Traditional disputes over epistemic probability have focused on seemingly philosophical issues, such as the ontological inappropriateness of epistemic probability distributions [15,17], the unjustified use of prior probabilities [43], and the hypothetical logical consistency of personal belief functions in highly abstract decision-making scenarios [13,44]. Despite these disagreements, the statistics community has long enjoyed a truce sustained by results like the Bernstein–von Mises theorem [45, Ch. 10], which indicate that Bayesian and frequentist inferences usually converge with moderate amounts of data.
The false confidence theorem undermines that truce, by establishing that the mathematical form in which an inference is expressed can have practical consequences. This finding echoes past criticisms of epistemic probability levelled by advocates of Dempster–Shafer theory, but those past criticisms focus on the structural inability of probability theory to accurately represent incomplete prior knowledge, e.g. [19, Ch. 3]. The false confidence theorem is much broader in its implications. It applies to all epistemic probability distributions, even those derived from inferences to which the Bernstein–von Mises theorem would also seem to apply.
Simply put, it is not always sensible, nor even harmless, to try to compute the probability of a non-random event. In satellite conjunction analysis, we have a clear real-world example in which the deleterious effects of false confidence are too large and too important to be overlooked. In other applications, there will be propositions similarly affected by false confidence. The question that one must resolve on a case-by-case basis is whether the affected propositions are of practical interest. For now, we focus on identifying an approach to satellite conjunction analysis that is structurally free from false confidence.
From Section 5:
The work presented in this paper has been done from a fundamentally frequentist point of view, in which θ (e.g. the satellite states) is treated as having a fixed but unknown value and the data, x, (e.g. orbital tracking data) used to infer θ are modelled as having been generated by a random process (i.e. a process subject to aleatory variability). Someone fully committed to a subjectivist view of uncertainty [13,44] might contest this framing on philosophical grounds. Nevertheless, what we have established, via the false confidence phenomenon, is that the practical distinction between the Bayesian approach to inference and the frequentist approach to inference is not so small as conventional wisdom in the statistics community currently holds. Even when the data are such that results like the Bernstein-von Mises theorem ought to apply, the mathematical form in which an inference is expressed can have large practical consequences that are easily detectable via a frequentist evaluation of the reliability with which belief assignments are made to a proposition of interest (e.g. ‘Will these two satellites collide?’).
[...]
There are other engineers and applied scientists tasked with other risk analysis problems for which they, like us, will have practical reasons to take the frequentist view of uncertainty. For those practitioners, the false confidence phenomenon revealed in our work constitutes a serious practical issue. In most practical inference problems, there are uncountably many propositions to which an epistemic probability distribution will consistently accord a high belief value, regardless of whether or not those propositions are true. Any practitioner who intends to represent the results of a statistical inference using an epistemic probability distribution must at least determine whether their proposition of interest is one of those strongly affected by the false confidence phenomenon. If it is, then the practitioner may, like us, wish to pursue an alternative approach.
[boldface emphasis mine]
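To make the probability-dilution / false-confidence effect concrete, here is a small toy simulation - my own sketch, not code from the paper - assuming a simplified 2-D encounter plane, a Gaussian epistemic distribution for the miss vector centred on the tracking estimate, and an assumed 10 m combined hard-body radius. The satellites are placed on a true collision course; as the tracking noise grows, the belief assigned to the false proposition "the satellites will not collide" climbs toward 1.

    # Toy sketch (my own illustration, not code from the paper): probability
    # dilution in a simplified 2-D conjunction problem. All numbers are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    R = 10.0                          # assumed combined hard-body radius [m]
    true_miss = np.array([0.0, 0.0])  # the satellites really are on a collision course

    def belief_no_collision(estimate, sigma, n=20_000):
        """Belief in 'no collision' when the miss vector is modelled as N(estimate, sigma^2 I)."""
        samples = estimate + sigma * rng.standard_normal((n, 2))
        p_collision = np.mean(np.linalg.norm(samples, axis=1) < R)
        return 1.0 - p_collision

    for sigma in [5.0, 50.0, 500.0]:  # tracking-noise levels [m]
        # Frequentist check: average the belief over many tracking-data realizations.
        beliefs = [belief_no_collision(true_miss + sigma * rng.standard_normal(2), sigma)
                   for _ in range(200)]
        print(f"sigma = {sigma:6.1f} m -> mean belief in 'no collision' = {np.mean(beliefs):.3f}")

Worse tracking data yields a smaller computed collision probability and a near-certain "safe" verdict, which is exactly the failure mode the excerpts above describe.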
submitted by FA_in_PJ to statistics

Assorted excerpts on aleatory and epistemic uncertainty, from various sources:

"[...] Industrial Analysis and Design" - Charles Hirsch, Professor em., Vrije Universiteit Brussel, and President, NUMECA International (MUSAF2, Toulouse, September 2013): The Role of Uncertainties in VP - Uncertainty quantification and management has been recognized in the last few years as a major component of Virtual Prototyping and risk management in industrial design. Introducing the probabilistic nature [...]

"Development of Uncertainty Methodologies and Analysis Using Logic Trees for Levee Risk Assessments", GeoRisk 2017, Denver, June 2017; DOI: 10.1061/9780784480724.021; Authors: Robert [...]

"[...] under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or [...]"

"[...] ability or risk analysis problem that involves a set of input variables x = (x1, ..., xn). Should the uncertainty in X be categorized as aleatory or epistemic? The answer depends on the circumstances. If the desired strength is that of the concrete in an existing building, then the uncertainty should be categorized as epistemic if it is decided that specimens taken from the building can be [...]"

"Uncertainty analysis is presented as an integration problem involving probability spaces for stochastic and subjective uncertainty. Approximation procedures for the underlying integrals are described that provide an assessment of the effects of stochastic uncertainty, an assessment of the effects of subjective uncertainty, and a basis for performing sensitivity studies. Extensive use is made [...]"

"Our analysis shows that the aleatory uncertainty associated with making catchment simulations using this data set is significant (50%). Further, estimated epistemic uncertainties of the HyMod, SAC-SMA, and Xinanjiang model hypotheses indicate that considerable room for model structural improvements remains." Citation: Gong, W., H. V. Gupta, D. Yang, K. Sricharan, and A. O. Hero III (2013) [...]

"The interpretation of pure aleatory uncertainty is carried out as an extensional measure of relative frequency, while the interpretation of pure epistemic uncertainty is conducted as an intentional measure of confidence. In this manner, using relative frequency may trigger more aleatory thinking than drawing out probability numbers. Several studies indicate that erroneously judging a combined probable and improbable event as more likely to happen than an improbable event alone occurs less [...]"

"In the context of this paper, aleatory uncertainty is specifically defined as the uncertainty in the phenomena under analysis (i.e., the natural hazard and the structural response), and epistemic uncertainty is defined as the uncertainty related to the decision analysis model."

Abstract: "This paper develops an efficient probabilistic approach for uncertainty propagation in multidisciplinary system analysis (MDA) under aleatory uncertainty (i.e., natural or physical variability). A decoupled approach is used in this paper to un-nest the multidisciplinary system analysis from the probabilistic analysis to achieve computational efficiency."

Aleatory Variability and Epistemic Uncertainty: "Aleatory variability and epistemic uncertainty are terms used in seismic hazard analysis that are not commonly used in other fields, but the concepts are well known. Aleatory variability is the natural randomness in a process. For discrete variables, the randomness is parameterized by the probability of each possible value. For continuous [...]"
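Taken together, the excerpts above keep returning to the same operational recipe: represent aleatory variability with a probability distribution sampled in an inner loop, and represent epistemic uncertainty by varying fixed-but-poorly-known quantities in an outer loop, so the output is a family of distributions or failure probabilities rather than a single number. The following is a minimal sketch of that two-loop separation; it is my own construction, not code from any of the sources quoted above, and the load model, capacity, and all numbers are illustrative assumptions.

    # Minimal two-loop sketch, illustrative only: the outer loop samples an
    # epistemic quantity (a fixed but poorly known mean load), the inner loop
    # samples aleatory variability (natural load-to-load scatter).
    import numpy as np

    rng = np.random.default_rng(1)
    capacity = 12.0                        # hypothetical fixed capacity
    n_epistemic, n_aleatory = 20, 10_000

    failure_probs = []
    for _ in range(n_epistemic):
        mu = rng.uniform(8.0, 10.0)        # epistemic: plausible interval for the mean load
        loads = rng.normal(mu, 1.5, n_aleatory)  # aleatory: scatter about that mean
        failure_probs.append(np.mean(loads > capacity))  # one exceedance probability per mu

    print(f"P(load > capacity) spans {min(failure_probs):.4f} to {max(failure_probs):.4f} "
          f"across the epistemic samples")

Folding the outer loop into the inner one - treating the epistemic spread in mu as if it were just extra aleatory scatter - would produce a single averaged failure probability and hide exactly the distinction these excerpts are drawing.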
