Liquid Gardens, on 15 November 2012 - 04:08 AM, said:

So there may well be a 75% chance of one of the building collapsing given your system here, which I don't really buy anyway. Despite saying there is no explanation for a fire and damage based collapse that has been demonstrated within the bounds of reality. Even if I'm kind and cut it in half to 25% probability of collapse and we leave WTC7 out of it, that means, using your math, chances are 57% (75%x75%) in favor of neither building collapsing. On what planet is 43% in favor of at least one collapse outside of the bounds of reality? We can put it in your favor at 90% leaving a 19% chance of at least one collapse and I don't consider that outside of the bounds of reality either.

I need to clarify in a few places because I can see that things are not clear and arguments are getting crossed – that’s my fault. I did warn that I am being overly generous to the official theory.

First, the system of generating probabilities used above is a very simplistic method based upon possible theoretical outcomes/simulations within a margin of error determined by NIST. It is important to understand how this works so that you know what the probability above really represents. If we visualise a line of increasing damage severity – at the left-hand end is a less severe case, at the centre is a best estimate case, at the right-hand end is a more severe case – this is the product of NIST’s simulations. Over half of that line (>51%), from the less severe case to the best estimate case, did *not* produce collapse. I use this fact for no other reason than its simplicity, to point out that two tower collapses is somewhat against the odds. As we have seen, this method alone results in a 24% (49% x 49%) probability of both towers collapsing as per the official theory, and therefore a 76% probability that something other than the official theory occurs; one or fewer collapses.
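The 24%/76% figures are just the product rule for two independent events; a quick Python sketch (my own, purely to check the arithmetic) reproduces them:

```python
# If each tower independently has a 49% chance of collapsing
# (the simplistic reading of NIST's severity line used above),
# then the chance that *both* collapse is the product:
p_single = 0.49            # one tower collapses
p_both = p_single ** 2     # both towers collapse
print(f"both collapse:  {p_both:.0%}")      # 24%
print(f"anything else:  {1 - p_both:.0%}")  # 76%
```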

Now that you have turned that second probability upon me as supporting at least one collapse, I will need to go into further detail than the simplification above. Here the next fact to realise is that not all points on our line above are of equal probability. The method NIST used to create the less severe case and more severe case involved simultaneous adjustment of numerous variable factors away from their best estimate to account for a margin of error. I.e. in the severe case the tower structure was made weaker, the airliner was made stronger and faster, and the angle of impact was adjusted to impart more energy to the core columns, etc. The more variables that are adjusted in one direction to favour a particular outcome, the more unlikely the case becomes. That is to say, the area around the best estimate case on our visualised damage severity line has a much greater probability of occurrence than either the left- or right-hand end of the line, where the less severe case and more severe case lie. It’s somewhat parallel to the probability of a sequence of coin tosses (ten in this example): -

0 heads = **0.10%** equivalent to less severe case (no collapse initiation)

1 head = 0.98%

2 heads = 4.39%

3 heads = 11.72%

4 heads = 20.51%

5 heads = **24.61%** equivalent to best estimate case (no collapse initiation)

6 heads = 20.51%

7 heads = 11.72%

8 heads = 4.39%

9 heads = 0.98%

10 heads = **0.10%** equivalent to more severe case (collapse initiation)
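The percentages in the list are simply the binomial distribution for ten fair coin tosses; a small Python sketch (mine, just to show where the numbers come from) reproduces them:

```python
from math import comb

# P(k heads in 10 fair tosses) = C(10, k) / 2**10
n = 10
for k in range(n + 1):
    p = comb(n, k) / 2 ** n
    print(f"{k:2d} heads = {p:6.2%}")
# 0 and 10 heads each come out at 0.10%, 5 heads at 24.61%,
# matching the figures in the list above.
```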

Or in graph format (showing standard deviation from a best estimate): -

The x axis represents our visualised line of possible theoretical outcomes/damage severity. So, although NIST did not determine the exact non-collapse to collapse crossover point, let’s suppose that 70% of the line represents no collapse and 30% represents collapse. Using the figures from our coin example, we are actually looking at 70% of the line (from 0-7 heads) representing a 95% probability (0.10 + 0.98 + 4.39 + 11.72 + 20.51 + 24.61 + 20.51 + 11.72) of non-collapse. This now leaves the probability of both towers collapsing at a quarter of a percent (5% x 5%).
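The cumulative figure can be checked the same way; a short sketch (again my own arithmetic check, with the 0-7 heads = no collapse supposition taken from the text):

```python
from math import comb

# Sum the binomial probabilities for 0-7 heads out of 10 tosses,
# treated here as the "no collapse" portion of the severity line.
n = 10
p_no_collapse = sum(comb(n, k) for k in range(8)) / 2 ** n
p_collapse = 1 - p_no_collapse
print(f"non-collapse (0-7 heads): {p_no_collapse:.0%}")  # 95%
print(f"collapse (8-10 heads):    {p_collapse:.1%}")     # 5.5%
# Squaring the collapse probability for two towers gives ~0.30%,
# close to the quarter-percent figure (the text rounds 5.5% to 5%).
print(f"both towers collapse:     {p_collapse ** 2:.2%}")
```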

And *still* I am being *extremely* generous to the official theory, as we have not even begun accounting for the fire severity, which NIST also ‘turned up’, and the additional manual inputs NIST required *on top of* the severe case to induce collapse initiation in the model. That’s another notable point by the way – all the time here we are dealing only with the collapse initiation, not the progression (whose own problems are dealt with separately).

The last point to note is that you are intermixing my arguments. My reference to ‘within the bounds of reality’ is an add-on to all of the above. Going back to our visualisation of the line representing lesser and greater theoretical outcomes of damage severity, it is apparent through comparison with photographic evidence that a segment of that *theoretical* line never existed in *reality*. I.e. the damage produced by NIST’s theoretical severe case exceeded the damage severity seen in reality. This is the area that I deem ‘beyond reality’. What this means is that a part of the line which theoretically produces collapse in the model must be taken away. Going back to the coin example, we would perhaps be looking at 0-7 no collapse, 8 collapse, 9-10 beyond reality (let’s say we have already seen 2 coins land on tails and know that 9-10 heads are now impossible to achieve from our 10 coin tosses). The probability for the official theory shrinks ever further. It is also quite possible in the WTC case, and this is what I believe, that 0-7 produces no collapse and 8-10 is beyond reality. This is quite permissible within NIST’s results, and where does that leave the official collapse theory? At a minimum, NIST needed to show that the crossover point from non-collapse to collapse was within the bounds of reality, but they declined to do so, thus failing to prove a collapse as any sort of possibility at all.
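The truncated coin example can also be made concrete; here is one way to run the numbers (my own sketch of the analogy, not anything from NIST): with 2 tails already showing, only the remaining 8 tosses vary, so 9-10 heads are impossible and 8 heads is the lone remaining ‘collapse’ outcome.

```python
from math import comb

# Two coins have already landed tails, so only 8 fair tosses remain.
# Total heads can now be at most 8; 8 heads = "collapse" in the analogy.
n = 8
p_collapse = comb(n, 8) / 2 ** n      # all 8 remaining tosses are heads
p_no_collapse = 1 - p_collapse        # 0-7 heads
print(f"collapse (8 heads):      {p_collapse:.2%}")     # 0.39%
print(f"no collapse (0-7 heads): {p_no_collapse:.2%}")  # 99.61%
```

Compared with the untruncated ~5% collapse figure, removing the ‘beyond reality’ outcomes shrinks the collapse probability by an order of magnitude, which is the point being made.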

Once we throw in WTC7 (three such unlikely collapses in one day) the probability becomes astronomically small – simply unworthy of consideration.

Liquid Gardens, on 15 November 2012 - 04:08 AM, said:

I think this is all a game anyway as we and NIST also cannot know the exact percentages because it was simply not possible as they do not have the data necessary to accurately determine these probabilities to the granularity that is required for the argument you are attempting to make. Thus saying 'the mid-point base case did not produce a collapse' doesn't really say so much; the significance of the mid-point case is reliant in large part on being able to accurately determine these probabilities. I suspect that NIST recognized this also.

The NIST study is the most accurate simulation of the towers, impact damage and fires, with the most scientifically sound results, that is available. There is no reason that the physics and model properties should not be reasonably accurate given the known data – the observable match between the simulated base case and reality was, after all, very good, thus validating the results. As those results demonstrated the collapses to be ‘unlikely’ (being generous – see above), I do understand why you would like to discredit the inherent probabilities in this case. I’m not sure it’s exactly unbiased of you, but I do understand it’s the best route your argument has to go. Personally I must accept the results as I find them.

Liquid Gardens, on 15 November 2012 - 04:08 AM, said:

It's not what we would call a scientific conclusion then, correct? NIST nor most any scientist would say, 'this study has demonstrated that the probability of a collapse occurring is definitely less than 50%, we are confident we have enough data to determine that', you agree? But you don't agree with why they wouldn't say that?

It is unlikely that any NIST scientist would publicly admit it (well, James Quintiere, NIST's former Chief of Fire Science, did: **"the WTC investigation by NIST falls short of expectations by not definitively finding cause"**), but I don’t see that any other probability can be derived from the scientific results. The only saviour which could turn probability in favour of the official theory would be if NIST had made an unfathomable mess of one or more of their estimates, including margin of error, somewhere – I have no reason to believe this is the case, do you?