
Fuggedaboutit Physics


Startraveler


A philosopher presents the view that physical laws forbidding something are actually very helpful for clarifying concepts (through the search for loopholes), even though many find such laws dissatisfying. It's interesting to think about.

PhysicsWeb:

Many principles of physics are of the form "If you do this, what will happen is that." Newton's second law, for example, says that the acceleration of a particular mass will be proportional to the force applied to it. Such principles imply that certain effects are practically impossible. A small number of principles, however, belong to a different category. These say, in effect, "That cannot happen." Such principles imply that certain effects are physically impossible.

Rebels with a cause

Notorious examples of the latter include the first two laws of thermodynamics. The first law says that energy cannot be created or destroyed ("You can't win"), while the second can be stated in several forms, such as that heat cannot be transferred from a colder to a warmer body or that the entropy of a closed system always increases ("You can't break even, either"). Other examples include Heisenberg's uncertainty principle and the relativity principles regarding the impossibility of recognizing absolute velocity and the prohibition of faster-than-light travel.

Such principles often represent not "new physics" but deductions from other principles. What is different about them is their form. And to say that something is physically impossible tends to make scientists want to rebel.

No way

The physics of impossibility goes by several names. "Forget-about-it" physics is one; "no-way" physics is another. Half a century ago, the mathematician and historian of science Sir Edmund Whittaker referred to "postulates of impotence", which assert "the impossibility of achieving something, even though there may be an infinite number of ways of trying to achieve it".

"A postulate of impotence", Whittaker wrote, "is not the direct result of an experiment, or of any finite number of experiments; it does not mention any measurement, or any numerical relation or analytical equation; it is the assertion of a conviction, that all attempts to do a certain thing, however made, are bound to fail."

Postulates of impotence thus resemble neither experimental facts nor mathematical statements true by definition. Nevertheless, such postulates are fundamental to science. Thermodynamics, Whittaker said, may be regarded as a set of deductions from its postulates of impotence: the conservation of energy and of entropy. It may well be possible, he argued, that in the distant future each branch of science will be able to be presented, à la Euclid's Elements, as grounded in its appropriate postulate of impotence.

Contrarians

But no-way physics is important to science for another reason: it attracts contrarians. I am not talking about the endless attempts by frauds and naifs to get round the laws of thermodynamics by creating perpetual-motion machines. Rather, I mean serious physicists who find no-way physics a challenge to devise loopholes. In seeking these loopholes, they end up clarifying the foundations of the field.

Contrarian physicists played a key role in both the discovery and the interpretation of the uncertainty principle. In 1926 Werner Heisenberg was promoting his new matrix mechanics – a purely formal approach to atomic physics – by claiming that physicists had to abandon all hope of observing classical properties such as space and time. Pascual Jordan played the contrarian by devising a thought experiment to get round such claims.

Jordan argued that if one could freeze a microscope to absolute zero, then it should be possible to measure the exact position of an electron, say, or the time of a quantum leap. This seems to have inspired Heisenberg to think about the interaction between the observing instrument and the observed situation, which led him to the uncertainty principle. Jordan, the contrarian, forced Heisenberg to think operationally rather than philosophically, and to clarify the physics of the situation.

Another example of contrarian physics was James Clerk Maxwell's thought experiment involving a tiny creature who operates a small door in a partition inside a sealed box. By opening and shutting the door, the "demon" – as it was later called – lets all the faster-moving molecules into one side of the partition, violating the second law of thermodynamics by getting heat to flow to that side. The discussion of this thought experiment helped to clarify the then-mysterious concepts of thermodynamics.

The critical point

Heisenberg once wrote, "Almost every progress in science has been paid for by a sacrifice, for almost every new intellectual achievement previous positions and conceptions had to be given up. Thus, in a way, the increase of knowledge and insight diminishes continually the scientist's claim on 'understanding' nature."

Heisenberg is overstating the point: surely the advance of science involves developing more subtle and complex concepts that encompass the simpler existing ones. But these more subtle and complex concepts are often produced by those who are dissatisfied by the prospect of having to make the kind of sacrifice Heisenberg mentions.

Dissatisfaction is a powerful driving force in science, and it can arise in many ways. Sometimes it springs from a scientist's sense that a confusing heap of experimental data can be better organized. At other times it arises from the feeling that a theory is too complicated and can be simplified, or that its parts are not fitting together properly. Still other dissatisfactions arise from mismatches between a theory's predictions and experimental results.

No-way physics produces a special kind of dissatisfaction, involving the collision of science with our hopes and dreams – of limitless energy, of superluminal travel, of pinning things to specific places at specific times. Humans seem hard-wired to have such hopes, and hard-wired to balk at the science that dashes them. Small wonder then that no-way physics leaves them dissatisfied. But science wins in the end.

About the author

Robert P Crease is chairman of the Department of Philosophy, Stony Brook University, and historian at the Brookhaven National Laboratory, US



So tell us, Startraveler, what exactly is your position?

Do you abide by the "laws", or do you attempt to defy those "laws"?

Do you consider a mathematical 'proof' as absolute, or do you say, 'Well, all it is is mathematics, and mathematics isn't capable of experimentation, so it is possible that it can be done despite the proof'?


I do tend to believe the universe follows rules which remain fairly constant over time and can be easily expressed in terms of mathematics. Experiments are worthless unless we use them to learn some lesson about the underlying rules. If we were to have some amnesia after every experiment, no experiment would ever add to our knowledge base. All legitimate physics is derived from experimental results (or, in some cases, logical deductions from thought experiments incorporating certain principles that are then checked via experiment). Mathematical proofs are indeed, in a sense, absolute. Time and again we've seen already-developed mathematical concepts being applied to the physical world: vector calculus applied to the experimental facts of electromagnetism (the mathematical formulation itself revealing new insights and predictions), Riemannian geometry used to describe spacetime, linear algebra utilized to describe quantum phenomena. Do quantum particles have to obey an eigenfunction expansion theorem proved by mathematicians? I don't know if they have to or not, but we've seen that they just do. So do I think we can defy the "laws"? Well, for example, I don't think we can measure two non-commuting properties at once because it doesn't seem that they both have well-defined existences at once. It's difficult to get around something like that. But I imagine there are other rules that can be bent with enough ingenuity. It's difficult to make a blanket statement; it depends on which law or principle we're looking at.

Edited by Startraveler

To be honest, even though Einstein and Hawking have both said that FTL (Faster Than Light) travel is impossible, it is all based on extrapolated mathematical evidence. I do not believe that mathematics is sufficiently rigorous in terms of external influences to be able to make that assertion. I believe that FTL is possible, regardless of the supposed relativistic consequences of such theories. Can I prove it? No, absolutely not, but I think that applied physicists will be able to prove it eventually.


So do I think we can defy the "laws"? Well, for example, I don't think we can measure two non-commuting properties at once because it doesn't seem that they both have well-defined existences at once. It's difficult to get around something like that.

I've always wondered about that - what if it's possible to set up an experiment where one of those properties is forced into being a known value - a clever array of polarising filters, for example - would that mean that it would be impossible to perform any other measurement?


Yes, it would be impossible. Let me try to explain in a slightly mathematical way (we can slip between math and experiment because, well, it works). I'm not sure how much you know about quantum mechanics or linear algebra, so if you already know this stuff, my apologies. In quantum mechanics, we often deal with mean values for observables like position or momentum, called expectation values (this is actually a concept from statistics, and the integral calculation required is borrowed from there). It is possible, however, to get an exact value instead of having to rely on mean values if a certain condition is met.

Suppose we have an operator, A. Operators are recipes for doing something (something like "take a derivative and then multiply by i"). They act on special functions called wavefunctions, ψ (the main characters of QM), which contain all the information about the physical system we're dealing with. Perhaps the ψ we happen to be dealing with obeys the equation Aψ = aψ. This is an eigenvalue equation and it means we took that recipe--the operator--and applied it to our wavefunction ψ and what we got out was that ψ back again (multiplied by a coefficient, a, we call the eigenvalue). A wavefunction that obeys this equation is obviously pretty special and we call it an eigenfunction of A.

Let me take a quick digression to explain what that equation corresponds to physically. Operators represent observables like position or momentum. The wavefunction basically contains all the information about the system in question. The eigenvalue is the value of the observable that we'll actually measure in an experiment. In the case that ψ is an eigenfunction of A, we don't have to worry about mean values (expectation values) but can instead determine the value with unlimited precision. That value will be a, the eigenvalue.
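
If it helps to see that distinction with actual numbers, here is a toy sketch in Python/NumPy (a two-level matrix stands in for the operator and the states, so it's purely illustrative): a generic state only has a mean value with some spread, while an eigenstate returns its eigenvalue with no spread at all.

```python
import numpy as np

# Toy Hermitian matrix standing in for an observable A
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues and eigenvectors (the finite-dimensional analogue of eigenfunctions)
eigvals, eigvecs = np.linalg.eigh(A)

# A generic normalized state: repeated measurements only give a mean (expectation) value
psi = np.array([0.8, 0.6])
mean = psi @ A @ psi
spread = psi @ A @ A @ psi - mean**2
print(mean, spread)          # some mean value, with a nonzero spread

# An eigenstate obeys A phi = a phi: measuring A returns the eigenvalue a exactly
phi = eigvecs[:, 0]
mean_e = phi @ A @ phi
spread_e = phi @ A @ A @ phi - mean_e**2
print(mean_e, spread_e)      # the eigenvalue, with (numerically) zero spread
```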

Suppose we have another observable of our system in mind, represented by the operator B. In order to determine the value of that observable with unlimited precision (and not just get a mean value), the system has to also obey the equation Bψ = bψ, where b is the eigenvalue associated with that operator. So in order to determine both physical observables with unlimited precision, our wavefunction ψ for the system has to satisfy both equations. That is, it has to be a simultaneous eigenfunction of both the A and the B operator.

Now we can look at the commutator of the operators, which is just the quantity [A, B] = AB - BA. Let's hit the wavefunction ψ with that: [A,B]ψ = (AB - BA)ψ = ABψ - BAψ. We know what Bψ and Aψ are from the eigenvalue equations above, since we're working under the assumption that we want both to be satisfied. So that last term is ABψ - BAψ = Abψ - Baψ. We can pull those eigenvalues a and b through the operators since they're just numbers, and we have bAψ - aBψ, which we know is baψ - abψ = 0. In other words the commutator [A,B] = 0--the operators commute.

That is, if we have two observables (represented by operators in quantum mechanics) then they have to commute if we want to measure both observables with unlimited precision. The position and momentum operators, for example, do not commute.
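
Here's a small numerical illustration of that criterion (Python/NumPy, with the Pauli spin matrices standing in for observables; just a sketch, not anything specific to the filter setup discussed here):

```python
import numpy as np

def commutator(A, B):
    """[A, B] = AB - BA."""
    return A @ B - B @ A

# Pauli spin matrices as 2x2 stand-ins for quantum observables
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
# A second observable that is diagonal in the same basis as sigma_z
D = np.array([[3, 0], [0, 7]], dtype=complex)

print(commutator(sigma_x, sigma_z))  # nonzero: no shared eigenbasis, no joint sharp values
print(commutator(sigma_z, D))        # zero matrix: simultaneous eigenfunctions exist
```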

You asked if it's physically impossible to perform a second measurement after you've made one measurement with your clever array of polarizing filters. The array of filters is really a set of instructions of things to do to our wavefunction as it passes through--it's a recipe. That is, it's an operator. The measurement process is itself the physical realization of hitting the wavefunction ψ with an operator. So can we do one measurement on the system to get a value with unlimited precision, then do another measurement to get a different value also with unlimited precision? Only if the two operators (measurements) in question commute can that be done and yield the desired results. What it really amounts to saying is that there are no two experiments you could devise that could yield such precise results unless the commutativity condition is met. There are operators that do commute and allow these sorts of simultaneously precise results (quantities along different axes, for example), and obviously for those you could set up a pair of measurements to get the results you want.

Apologies to anyone this post confuses, annoys, or exasperates.

Edited by Startraveler

Yes, it would be impossible....

Ah. Beautifully explained, Startraveller. Thank you. I think I understand. In terms of simple algebra - the operators for multiplication and addition would be considered commutative, as a*b = b*a and a+b = b+a, whereas division and subtraction are not as a-b != b-a and a/b != b/a.

At least now I can see the problem clearly. Thanks once again.

Edited by Tiggs

I've always wondered about that - what if it's possible to set up an experiment where one of those properties is forced into being a known value - a clever array of polarising filters, for example - would that mean that it would be impossible to perform any other measurement?

Well, the question at that point becomes: are you still measuring a property as it exists normally or has your "clever array" made a change to it?


Ah. Beautifully explained, Startraveller. Thank you. I think I understand. In terms of simple algebra - the operators for multiplication and addition would be considered commutative, as a*b = b*a and a+b = b+a, whereas division and subtraction are not as a-b != b-a and a/b != b/a.

Yes, absolutely. You can get an idea of why AB /= BA in some circumstances if you think of A and B as being matrices. Matrix multiplication involves multiplying the row elements of one matrix against corresponding elements in the column of the other and adding the products together to get the corresponding elements of the product matrix. Since this involves a slightly more complicated recipe than just multiplying two numbers, it's not difficult to see why the order of the matrices will usually matter. Indeed, operators can often be represented as matrices.
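
A quick toy computation shows the order-dependence (the particular matrices here are arbitrary, chosen only for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Row-times-column multiplication: the order of the factors matters
print(A @ B)   # [[2 1], [4 3]]
print(B @ A)   # [[3 4], [1 2]]
print(np.array_equal(A @ B, B @ A))  # False: AB != BA in general
```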

In the position basis, the position operator is just x and the x-component of the momentum operator is -ih d/dx (that h is actually Planck's constant h divided by 2 pi: h-bar). So you can see that if we were to hit a function ψ with xp = -x ih d/dx we'd, of course, get -x ih dψ/dx. If we were to go the other way and hit ψ with px = -ih d/dx x then we'd instead have -ih d(xψ)/dx, which requires using the product rule to get -ih(x dψ/dx + ψ). So the commutator is [x, p]ψ = (xp - px)ψ = -ihx dψ/dx + ih(x dψ/dx + ψ) = ih ψ.

This is a very important result called the canonical commutator: [x,p] = ih. Again, you can see that these don't commute primarily because the order in which you put them determines whether the x sits outside the derivative or is forced inside it, along with the function we operate on.
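
If you have SymPy handy, you can let it do the product-rule bookkeeping; this is just a symbolic re-check of the calculation above, with hbar kept as a symbol:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)

# Position operator: multiply by x.  Momentum operator: -i*hbar d/dx.
def X(f):
    return x * f

def P(f):
    return -sp.I * hbar * sp.diff(f, x)

# (xp - px) acting on psi: the product rule leaves behind i*hbar*psi
print(sp.simplify(X(P(psi)) - P(X(psi))))   # I*hbar*psi(x)
```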

At least now I can see the problem clearly. Thanks once again.

No problem.

Well, the question at that point becomes: are you still measuring a property as it exists normally or has your "clever array" made a change to it?

That's one of the primary questions involved in trying to make sense of quantum mechanics (i.e. the famous interpretations of quantum mechanics). Are we measuring pre-existing properties or are we causing something to happen with our apparatuses that wasn't happening before? It's interesting because if it's the former, then something isn't quite right with quantum mechanics because it can't predict for sure the outcome of a measurement, it can only give us probabilities of this or that happening. There must be some extra information hidden somewhere in a way we haven't quite figured out yet. If it's the latter then we have a strange situation where a system seems to exist in multiple states at once before we send it through our apparatus and then it falls into one particular state when we do send it through.

The evidence seems to indicate that the latter possibility is in fact what's happening but the book is far from closed on this one. When I get time, I actually want to write up a thread on my (and others') thoughts on the nature of science/physics itself using this question as an example--a long, musing, quasi-philosophical thread. It'll have to wait for now, though.


That's one of the primary questions involved in trying to make sense of quantum mechanics (i.e. the famous interpretations of quantum mechanics). Are we measuring pre-existing properties or are we causing something to happen with our apparatuses that wasn't happening before? It's interesting because if it's the former, then something isn't quite right with quantum mechanics because it can't predict for sure the outcome of a measurement, it can only give us probabilities of this or that happening. There must be some extra information hidden somewhere in a way we haven't quite figured out yet. If it's the latter then we have a strange situation where a system seems to exist in multiple states at once before we send it through our apparatus and then it falls into one particular state when we do send it through.

It's interesting that you should mention that.

What if the properties of the system are constantly changing all of the time, rather than being in superposition?

As far as I can see, Bell's inequality only holds true if you make the assumption that the properties are static - which for waves, for example, just doesn't feel right...


If we make a measurement (cause the collapse) and we repeat the measurement quickly enough we'll get the same value. If you were of what one might call the realist persuasion, you'd say that of course this should be the case--the measurement returns a state the system was already in before we measured and will continue to be in (assuming we don't change things too much) after we stop looking. In other words, there's no collapse because things are already in some state and we just happen to look sometimes. The more orthodox position is that the system was in the superposition before we measured and the reason a repeated measurement will give the same value is that something indeed forced some kind of collapse (though the system quickly begins to spread out and evolve according to the Schrodinger equation again). In the view you're suggesting, I guess we'd take that time between the initial measurement and the time where we could expect to get a different value in a repeated measurement as a sort of limit on the speed at which the system cycles through the possible states. But suppose we keep making repeated measurements (i.e. keep the wavefunction from spreading out)--the cycling through states will cease and somehow the measurement operation will have halted the natural constant changing of the system. I think that would raise its own questions--within that particular conception--of what's going on.

But let me ask you a question. Suppose we take a particular observable (which we've already said is represented by an operator in quantum mechanics). Maybe this observable is of a variety that's said to have a continuous spectrum, meaning instead of having discrete eigenvalues (eigenvalues being the numbers actually measured in experiments) the eigenvalues actually have a continuous spread. Position is an example of an observable having a continuous spread like that. So if you're suggesting a system is at a particular position and cycles through the different possible positions with time, I'm imagining a system at a particular position and smoothly rolling through the continuous range of possibilities. How does this differ from a "realist" view of a particle having a particular position (or at least a particular range consistent with the uncertainty principle) and just moving with time? It seems like your suggestion is--in some cases, like this one--no different from the idea that a system possesses a particular value for a variable even before we measure it. You're explicitly saying that the system is continuously evolving but I don't know that this really distinguishes your approach from the regular realist one (which, as you've pointed out, runs into trouble with things like Aspect's test of Bell's inequalities).

Another thing to keep in mind is that in QM we can deal with quantum ensembles--large groups of particles prepared in the same state. That way we can do multiple measurements without having to worry about the effects we have on each particle just by measuring it. Suppose we have such an ensemble with two possible eigenstates it can be measured in; let's also suppose that there's a one-third probability we'll measure a particle in this state to be in the first eigenstate and a two-thirds probability it will be in the second eigenstate. So what we're saying is that there's a roughly 33% chance that a measurement of a particle's state will yield eigenvalue 1 and a roughly 67% chance we'll get eigenvalue 2 out of the measurements. Of course when we actually measure one particle in this state, we'll get one eigenstate or the other and we won't have shed a whole lot of light on those probabilities we dealt with prior to measurement. But with an ensemble of many particles in this state we can keep measuring different particles and we should find that we measure roughly a third of the particles to yield the first eigenvalue and roughly two-thirds to yield the second eigenvalue. The orthodox bunch will interpret this as reflecting the projections, or probabilities, as just that--a probability that a particle in that state will collapse this way or that. The behavior of the ensemble is just a reflection of these probabilities playing out on a larger scale--that is, the law of large numbers steps in here (the same way that an individual quarter having a 50-50 probability of landing heads or tails results in a large number of tosses yielding results that are roughly half heads and half tails). The realist would, I think, say that the math just told us that in such an ensemble one-third of those particles would be in the first eigenstate all along and two-thirds would be in the second all along, with no superpositions. But in your view of the particles cycling back and forth between the two possible eigenstates, it seems like if I keep measuring particles at random intervals I could fail to get the correct ratios.

Of course, if you're interpreting the 1/3 and 2/3 probabilities to mean that an individual particle spends 1/3 of its time in the first eigenstate and 2/3 in the second, then the law of large numbers should kick in again and help you recover the correct results (in other words, it seems you would have to use that interpretation to make this match up with experiment). But now you've got strange questions about why these particles exist in a definite state but seem to be on a timer that flips them into a new one (somehow) at the correct time--and this without any (apparent) operation like measurement acting on them to make it happen. I'm not sure this is any more appealing than any of the other viewpoints. More than that, remember in the first paragraph we put a sort of limit on the speed at which a system cycles through the possibilities. Imagine we had a system with more than just two possible states coming out of the measurement--how does a system divvy up the time it spends in each possible state if there are now 10, 100, 1000, etc. possible states it must cycle through (while divvying up the time in accordance with the numbers usually interpreted to be probabilities)?
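
For what it's worth, here's a throwaway simulation of the law-of-large-numbers point (the 1/3 and 2/3 are just the example weights from above, treated as outright probabilities):

```python
import random

# Toy two-outcome observable: eigenvalue 1 with probability 1/3,
# eigenvalue 2 with probability 2/3 (the example weights from above)
def measure_one_particle():
    return 1 if random.random() < 1 / 3 else 2

# One measurement on one particle tells us almost nothing about the weights...
print(measure_one_particle())

# ...but over a large ensemble the observed frequencies settle near 1/3 and 2/3
N = 100_000
results = [measure_one_particle() for _ in range(N)]
print(results.count(1) / N, results.count(2) / N)
```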

Edited by Startraveler

I do tend to believe the universe follows rules which remain fairly constant over time and can be easily expressed in terms of mathematics. Experiments are worthless unless we use them to learn some lesson about the underlying rules. If we were to have some amnesia after every experiment, no experiment would ever add to our knowledge base. All legitimate physics is derived from experimental results (or, in some cases, logical deductions from thought experiments incorporating certain principles that are then checked via experiment). Mathematical proofs are indeed, in a sense, absolute. Time and again we've seen already-developed mathematical concepts being applied to the physical world: vector calculus applied to the experimental facts of electromagnetism (the mathematical formulation itself revealing new insights and predictions), Riemannian geometry used to describe spacetime, linear algebra utilized to describe quantum phenomena. Do quantum particles have to obey an eigenfunction expansion theorem proved by mathematicians? I don't know if they have to or not, but we've seen that they just do. So do I think we can defy the "laws"? Well, for example, I don't think we can measure two non-commuting properties at once because it doesn't seem that they both have well-defined existences at once. It's difficult to get around something like that. But I imagine there are other rules that can be bent with enough ingenuity. It's difficult to make a blanket statement; it depends on which law or principle we're looking at.

Interesting topics. I have 4 years of Solid State Physics. Great topical depth with the mathematics; I always thought of advanced mathematics as a useful scientific language of man that is often good at describing things in nature, Classical or Modern (Quantum). Wave functions (ψ*ψ, per Heisenberg) give only a probability, according to Heisenberg, and not one due to ignorance of the system. One of my professors told me that Calculus will not give you the answer to anything (approximations), but often it will get you infinitely close to the answer. Just food for thought and conversation. Thanks, Robot


Of course, if you're interpreting the 1/3 and 2/3 probabilities to mean that an individual particle spends 1/3 of its time in the first eigenstate and 2/3 in the second, then the law of large numbers should kick in again and help you recover the correct results (in other words, it seems you would have to use that interpretation to make this match up with experiment). But now you've got strange questions about why these particles exist in a definite state but seem to be on a timer that flips them into a new one (somehow) at the correct time--and this without any (apparent) operation like measurement acting on them to make it happen. I'm not sure this is any more appealing than any of the other viewpoints. More than that, remember in the first paragraph we put a sort of limit on the speed at which a system cycles through the possibilities. Imagine we had a system with more than just two possible states coming out of the measurement--how does a system divvy up the time it spends in each possible state if there are now 10, 100, 1000, etc. possible states it must cycle through (while divvying up the time in accordance with the numbers usually interpreted to be probabilities)?

That's pretty much the way I saw it working.

For the sake of simplicity, let's talk marbles.

Take 3 marbles, 2 black and one white, packed tight together and rotating in a single plane. If you took a look at them through a pinhole, 2/3 of the time they would appear black, the other 1/3 white.

In terms of combinations - let's say that black marbles have a positive effect on the overall state measured and white marbles have a negative effect on the overall state measured, dependent on their position. For example, if the white marble was at the back, then the overall state would be at its highest, and if at the front, then it would be at its lowest.

Spin it and measure the effect and instead of just two states, you get an analog state transition. Mix in enough different marbles, and you could create some fairly wild and varied patterns, yet all cycling.

Obviously, that's a wildly simplified model. For a start, I'd expect it to be spinning in multiple planes and there's no reason why there shouldn't be 3, 4, 5 or more different "marbles", each with their own effects on the state.

I don't, however, have any ideas as to why repeated measurement at short enough intervals would produce the same result, other than that the act of measurement stops the model from spinning for a short time - and that raises the question of how, and of why it starts spinning again.

*Sighs* - It may not be perfect, but it's that or superposition - and for some reason, like Einstein, it just doesn't feel right to me.

Edited by Tiggs
Added repeated measurement paragraph

The problem is that what you're describing is a local hidden variable. If you pictured the different possible states as being at spokes of a spinning wheel, arranged such that we can only see (measure) one at a time, then whatever's analogous to the angular momentum of the spinning wheel and the arrangement of the states on the spokes--that is, the stuff that makes the thing deterministic--are the hidden variable. Bell's theorem is interesting in that it doesn't assume anything about the hidden variable (other than that it's local) or the complexity of the bells and whistles underlying the hidden variables; it just assumes there is some such system in place. The wiki article on Bell's theorem actually presents a setup not all that different from your simplified spinning-marbles scenario:

The following example[3] illustrates and makes the nature of Bell inequalities easy to understand. Consider a particle with a slippery shape property that is either square or round, depending on which way we look at it. The particle cannot be seen from two directions at once, and looking at it changes how it might have looked from other directions. A source creates entangled pairs of these particles, so that if we look at the two from the same angle they have the same shape, and sends them in opposite directions. Shape detectors independent of each other and of the source are placed in the path of each particle and randomly change between three observing angles after the particles are emitted. Because the particles are entangled, the detectors report the same shape every time they happen to measure a pair from the same observation angle. Additionally the detectors measure the same shape for half of all runs when they are set arbitrarily and independently to one of the three angles. This last property does hold for some real systems, and is the key Bell found to show the existence of nonlocality.

To construct a local model for this situation, we must assume that the information for shape appearance at each angle is carried on the particles. This is the only local way to ensure that the same shape is measured every time the detector angles happen to be the same. We can represent this information by either an s (for square) or r (for round) in each of three slots corresponding to the three detector angles. Remember that we can only observe the shape from one angle at a time, and subsequent measurement will not reflect what the shape would have been if we had observed it from another angle. Thus we can learn only two of the three pieces of information by measurement, one from each particle. The unobserved value in each particle's instruction set is an unknowable, hidden variable. Suppose a pair of entangled particles which would look square from angles 1 and 2 and round from angle 3 each carry the instruction set ssr. For this particular instruction set, there are five possible detector settings which yield the same shape (11,22,33,12,21) and four settings which yield different shapes (13,23,32,31), so with random detector settings, the probability of detecting the same shape given this instruction set is 5/9. There are five more possible instruction sets (rss,srs,rrs,rsr,srr) that also give probability 5/9 for detecting the same shape. The only other possible instruction sets in this local model are rrr and sss, for which the same shape is measured with probability 1. Whatever the distribution of these instruction sets among the entangled pairs, the detectors will measure the same shape in at least 5/9 of all runs. . .

That goes on a little bit to talk about how the inequalities are violated but you get the picture: it's a set of instructions carried by the system that decides what's going to happen when we look at it, similar to your suggestion.
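
The quoted 5/9 figure is easy to check by brute force. Here's a short sketch that just enumerates the instruction-set model from the excerpt (nothing more than counting):

```python
from itertools import product

angles = (1, 2, 3)
settings = list(product(angles, angles))   # 9 equally likely detector-setting pairs

# Each pair carries an "instruction set": the shape (s or r) it would show at each angle
for instructions in product('sr', repeat=3):
    same = sum(instructions[a - 1] == instructions[b - 1] for a, b in settings)
    print(''.join(instructions), same, '/ 9')

# Mixed sets such as ssr give 5/9; sss and rrr give 9/9 -- so any such
# local instruction-set model predicts matching shapes in at least 5/9 of runs.
```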

I don't find all of this very appealing either but it seems to be how things are. At least as best we understand now.


This is a little off topic. Has anyone heard of quantum chemistry S1 and also a new theory called S2 chemistry? And if you have, what is your view of this subject? I am a bit of a novice on quantum theories but it is to me a very interesting subject.


This is a little off topic. Has anyone heard of quantum chemistry S1 and also a new theory called S2 chemistry? And if you have, what is your view of this subject? I am a bit of a novice on quantum theories but it is to me a very interesting subject.

Nothing much, they happen to be catalog numbers for university courses.


Nothing much, they happen to be catalog numbers for university courses.

Thanks for your response, but I am more interested in the aspect of S2 chemistry, which deals more with some of the theories that S1 quantum chemistry does not explain.


Exothermic and endothermic reactions can explain a lot of this, especially the faster-than-light travel... if we can get something to move at the speed of light using one of these reactions it should speed faster than light..... But in my limited experience I could be wrong.


I wonder why this thread was downrated?

This is a little off topic. Has anyone heard of quantum chemistry S1 and also a new theory called S2 chemistry? And if you have, what is your view of this subject? I am a bit of a novice on quantum theories but it is to me a very interesting subject.

I haven't but chemistry's not really my thing. What is it?

Exothermic and endothermic reactions can explain a lot of this, especially the faster-than-light travel... if we can get something to move at the speed of light using one of these reactions it should speed faster than light..... But in my limited experience I could be wrong.

Exothermic and endothermic just refer to whether or not something releases or absorbs heat when some reaction occurs. I don't see how chemical reactions could lead to faster than light travel.


I wonder why this thread was downrated?

I haven't but chemistry's not really my thing. What is it?

Exothermic and endothermic just refer to whether or not something releases or absorbs heat when some reaction occurs. I don't see how chemical reactions could lead to faster than light travel.

I'm not quite sure either, nor is chemistry my thing. Are you maybe confusing things, Essene, and talking about energy states? As in quantum states?

Maybe the excitation of electrons in orbitals? 1s, 2s, 2p etc?

Can you clarify more, Essene?

Edited by camlax

I'm not quite sure either, nor is chemistry my thing. Are you maybe confusing things, Essene, and talking about energy states? As in quantum states?

Maybe the excitation of electrons in orbitals? 1s, 2s, 2p etc?

Can you clarify more, Essene?

I had been reading in another forum a little joust on occult chemistry and known theories of quantum chemistry. Pretty interesting and I would love to hear any feedback on this theory. By Ron C.

<QUOTE>Hello Mike;

Many thanks for inquiring further into this, yes?

As it happens, Occult Chemistry only predicts funnels of ANU. An Ormusized atom no longer distinguishes between the nucleus and the electron cloud. In such an atom, the matter distribution will have all sorts of structures which are made of ANU and are very "delicate".

Any attempts that we know of using ordinary S1 chemistry observation tools to observe them with electro-optical means destroys the ANU configuration and the atom reverts to the predictions of quantum chemistry in terms of a nucleus in the center and some electrons in orbit around that.

Indeed, how does one know that these funnels are "real", right?

At this time, there are only indirect observations via the biological effects of ORMUS and, of course, the clairvoyant observations of Besant as well as a few pieces of mathematical physics.

In the 1980s a PhD thesis by Philips at Cambridge University in England came up with a theory of "quarklets", there being 3 quarklets to make up one quark and, as usual, 3 quarks to make up one proton or 1 electron. Philips concluded that the hydrogen atom would therefore be made of 9 quarklets for the proton and 9 quarklets for the electron, for a total of 18 quarklets, which by then he assumed to be the ANU that Besant had seen decades earlier.

Philips went on to compute the mass of the atoms of the periodic table and found the same results as Besant, which, as you noted, are not all that good.

More recently, the Tetrahedral Relativity model with its 13 dimensions, 9 more than the regular 4-dimensional space time, has given rise to a 9-dimensional harmonic oscillator model whose quanta would of course be the quarklets. From mathematical physics, it is known that when one wants to compute the energy levels, one adds a 10th dimension, the energy, to the calculation. Not too surprisingly, this is the SU(10) model that was used by Philips to compute the mass of the elements, with not so hot results. But then, one would expect all sorts of perturbations on top of a simple harmonic oscillator model. So, matters remain inconclusive at this time. Couple all this with the notion that S2 Chemistry involves two kinds of gravity, not just the standard one. This can be taken to mean that Ormusized atoms do not necessarily have the same mass as S1 atoms. Barry can recount many observed mass anomalies in ORMUS work, yes?

And of course there is the breath-catching feat that some yogis accomplish which is that of levitation, as if the mass was not really the mass of the body, oh well!

As you can see, there is not yet a great deal of calculable mathematics in S2 Chemistry, perhaps because of the difficulties of doing experimental observations, as most of the S2 Chemistry results involve biological phenomena which are notoriously difficult to control and quantify.

For example, there is a yogi in South East Asia who treats people by emitting sparks from his fingers. Then there are many documented instances of Quantum Touch practitioners who can charge dead cell phone batteries by sending QT energy into them (never physically touching the battery). Also, there is Danae Harding's sparking water and her personal magnetism that is almost 10 times larger than that of the planet. My Liquefied Barley Grass also sparks sometimes when I bottle it and my personal magnetism is only twice as large as that of the planet.

Add to this the troubling known fact that Maxwell's equations fail to correctly predict the fundamental law of magnetism, called Ampere's law, for the interaction of two current elements. These are the very Maxwell equations that are used in Pauling's quantum chemistry calculation, and you begin to get the uneasy feeling that maybe, just maybe, there is more than meets the eye that relies only on quantum chemistry, right?

And, of course, there is the ORMUS phenomenon that has no explanation at all in terms of Quantum Chemistry and you begin to see that the world is getting ready for an explosion of new knowledge, yes?<END QUOTE>


Hi Star - apologies for not responding sooner - I've been rather busy as of late.

With regards to Bell's Theorem - I think that what I'm trying to say is more easily explained using mathematics:

To help, I'm using the Bell's Theorem Simple proof from mtnmath.com.

Consider three properties A, B and C that an object might have. The objects and properties could be anything. For example the objects could be words and the three properties could be whether a word contains the letter `a', `b' or `c'. Another example might be pictures containing the colors red, green and blue. Now consider three categories of objects: (A and not B), (B and not C), and (A and not C). Assume we have a collection of objects that are candidates for each category. Denote the number of objects in a category by N(A, not B) etc. The following must hold:

N(A, not B) + N(B, not C) >= N(A, not C)

It's this fundamental piece of logic which I believe is flawed.

In a world where properties do not change, then I concur that this holds absolutely perfectly true. However, as I stated in my previous post - I don't see why properties of an object would NEED to be static.

Consider the possibility that the properties are not fixed over time, and you'll soon realise that the above logic would no longer hold true.

Let's skip back into English, just to make sure you understand what I'm saying.

We'll take a contrived example, using a room full of glamour models as our objects and give them the following properties:

Wearing Lingerie

Wearing Dresses

Wearing Boots

which are either true or false.

Translating the above, we can say the following:

The number of Models which are wearing Lingerie and are not wearing Dresses + the Models which are wearing Dresses and are not wearing boots is greater than or equal to the number of Models wearing Lingerie and not wearing Boots.

Let's construct a quick table, just to confirm this:

Name

Susan Wearing Lingerie, Dress & Boots

Mary Wearing No Lingerie, No Dress & No Boots

Shiela Wearing Lingerie, No Dress & No Boots

Beatrix Wearing No Lingerie, Dress & Boots

Paula Wearing Lingerie, Dress & No Boots

Bob Wearing No Lingerie, No Dress & Boots

The number of models wearing Lingerie and not wearing Dresses = Shiela = 1

The number of models wearing Dresses and not wearing Boots = Paula = 1

The number of models wearing Lingerie and not wearing Boots = Shiela + Paula = 2

As the sum of the first two counts (1 + 1 = 2) is greater than or equal to 2, our equation is satisfied.

However you play around with what the models are wearing (or not), the equation will always be satisfied. This is, quite simply, because every model counted on the right-hand side (Lingerie, no Boots) is either not wearing a Dress (and so appears in the first count) or is wearing a Dress with no Boots (and so appears in the second count), as the quick check below confirms.
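
Here's that quick check as a brute-force sketch in Python (it simply re-states the counting argument for every possible fixed outfit):

```python
from itertools import product

# For fixed (unchanging) outfits, the inequality
#   N(Lingerie, no Dress) + N(Dress, no Boots) >= N(Lingerie, no Boots)
# holds model-by-model, so it holds for any roomful of models.
violations = 0
for lingerie, dress, boots in product([True, False], repeat=3):
    lhs = (lingerie and not dress) + (dress and not boots)
    rhs = int(lingerie and not boots)
    if lhs < rhs:
        violations += 1

print(violations)   # 0 -- no static outfit can break the inequality
```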

However...

Models, being busy by nature and having to walk the catwalk several times in an evening, have a tendency to change what they're wearing.

If the models get changed between counting the number of models wearing lingerie and not wearing dresses & counting the number wearing dresses and not wearing boots... then our equation may no longer hold true.

Imagine the first catwalk involves Dresses and Boots. Our first measurement catches 0 models. They quickly get changed for their second catwalk session, Lingerie and Boots. Our second measurement catches 0 models. Exhausted for the evening, they come back for the after-show party, boots kicked off, and are counted again - this time all 6 are counted, as they're all still wearing lingerie.

Okay. I know it's a long drawn out example, but on the upside I got to post about models wearing lingerie (or not) in a physics thread. The point is that, since two separate measurements are made, if the properties of the object change between measurements, then the Boolean logic underpinning Bell's theorem collapses spectacularly.

Edited by Tiggs
Removed all references to x

There's much to be said here, I think. You raise an interesting point by bringing time into this but I'm not sure it's exactly in the way you think.

Before we go any further, we should recap what's going on here. It's convenient to think of a version of the EPR thought experiment devised by Bohm. Suppose we've got a neutral pi meson (spin zero) which then decays into an electron and a positron flying off in opposite directions. We set up two detectors, one in the direction the electron is going and the other in the direction the positron is going. When we measure the spin of one we'll either get up (which we'll denote as +1) or down (-1). We know, however, that if the detectors are parallel to each other then when we measure an up (+1) on one, the other will have to yield a down (-1). That is, we don't know which will be which but we know the two detectors have to yield opposites. Now a local hidden variable theory is something that suggests the outcomes are determined in advance. If we assume that this is the case and there is no influence of one on the other after they've separated, we can work out the consequences of this idea.

Bell didn't originally derive his inequality in exactly the same way as the derivation you linked to but he did embrace d'Espagnat's formulation in a famous 1981 talk he gave called "Bertlmann's Socks and the Nature of Reality" (which is fantastic and is the basis for most of what I'm about to say). The Bertlmann's socks analogy is one in which a consumer research organization is worried about whether a sock could survive one thousand washing cycles at 0°C, 45°C, and 90°C. This situation is easily translated into the Wigner-d'Espagnat inequality you quoted in your post. "A" just becomes the number of socks that could pass at 0°, "B" just becomes the number that could pass at 45°, and "C" the number that could pass at 90°. Bell notes, however, as you undoubtedly did:

But trivialities like this, you will exclaim, are of no interest in consumer research! You are right; we are straining here a little the analogy between consumer research and quantum philosophy. Moreover, you will insist, the statement has no empirical content. There is no way of deciding that a given sock could survive at one temperature and not at another. If it did not survive the first test it would not be available for the second, and even if it did survive the first test it would no longer be new, and subsequent tests would not have the original significance.

If I'm understanding your objection correctly, you're imagining that when we talk about "if one and not the other", etc., this implies two tests must be done on one particle, and something can happen in the time between those two experiments that renders this all useless. As Bell himself just noted, you couldn't do such a test on a particle. First of all, if the "sock" didn't survive the first test, it certainly couldn't be subjected to the second. And even if it did survive the first "wash", it wouldn't be a new sock anymore and the whole thing might be suspect.

The reason the sock analogy is even being used is because socks come in pairs. We make the assumption that both socks in the pair act the same way--that is, if one would survive the conditions of a wash, so would its partner if the test were performed on it. So now instead of talking about a condition A and not B in which a sock could survive a thousand washes at 0°C but not at 45°C, we start using the fact that it exists in a pair and alter what A and B (and C) mean a bit. We now talk about things like "the number of pairs in which one could pass at 0° and the other not at 45°" and so on. If we add in some random sampling we can get probabilities that a sock will pass given a certain condition without ever even thinking of doing more than one test on a single particle. We'll instead do a lot of different tests on a lot of different particles.

It should be pretty clear how the socks analogy relates to the real-world example. The 0°, 45°, and 90° are of course not temperatures but rather orientations of the detectors we've set up to measure the electron and positron spins. The pair of socks is the entangled electron-positron pair. Here we need to make the adjustment that instead of following the sock rule that if one passes a test so does its mate, we must substitute in the fact that the spins of the particles are going to be anticorrelated: if one passes the test, its mate in that situation would surely have failed. We are led to the inequality you posted and see that it is violated. The ultimate moral (not entirely crystal clear here) is that the quantum mechanical calculation for the probabilities of the outcomes of measurements in situations where the electron and positron detectors are oriented toward each other at different angles is not the same as a calculation that assumes the existence of a local hidden variable.
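
To put rough numbers on that mismatch: the ½ sin²(θ/2) formula below is the standard quantum prediction for both detectors reading spin-up on a singlet pair, and the angles are just one convenient choice; this is a sketch of the comparison, not the actual Aspect-style analysis.

```python
import math

def p_both_up(theta_deg):
    """Singlet-state probability that BOTH detectors read spin-up when
    their axes differ by theta degrees (standard quantum prediction)."""
    theta = math.radians(theta_deg)
    return 0.5 * math.sin(theta / 2) ** 2

# A Wigner-style local-realist inequality: P(a,b) <= P(a,c) + P(c,b)
a, b, c = 0, 90, 45
lhs = p_both_up(abs(a - b))
rhs = p_both_up(abs(a - c)) + p_both_up(abs(c - b))
print(lhs, rhs, lhs <= rhs)   # 0.25 vs ~0.146 -- the quantum value breaks the bound
```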

It seems to me your objection was based on a picture of multiple measurements being made on a single particle without proper consideration being given to what happens during the time lag. But there is no such time lag, because no one particle is subjected to more than one measurement. But it's still interesting to think about what role time could play in all this. We can change the orientation of the two detectors (and independently of each other, of course) while the electron and positron are in flight, long after they've parted company with each other. But we've still got to get the "right" results (i.e. one spin up and one spin down), regardless of when we measure the particles and what the relative orientations of the detectors are (note that we don't have to measure the spins at the same time). So you'd need to figure out a way to make the correlations work right to get the expected results, regardless of what hurdles we throw into the situation.

That said there was an objection concerning time in Bell's theorem brought up a few years ago by a pair of physicists. Let me post the news story on this thrown together by Nature at the time:

A hidden reality?

Proceedings of the National Academy of Sciences USA 98, 14224–14227 and 14228–14233 (4 December 2001)

Albert Einstein famously disputed with Niels Bohr over the interpretation of quantum theory. Unwilling to accept probabilistic quantum indeterminacy as the ultimate nature of things, Einstein argued instead that there might be a further, hidden layer to reality in which variables became precisely defined again. But experiments since the 1980s have seemed to rule out Einstein's 'hidden variables'.

Now Karl Hess and Walter Philipp have shown that the notion of hidden variables is not defunct after all. Admittedly, such variables, if they exist, must be rather more subtle than those Einstein might have envisaged — but the case is not yet closed.

The arguments hinge on the thought experiment concocted in 1935 by Einstein together with Boris Podolsky and Nathan Rosen. They showed how the canonical Copenhagen interpretation of quantum theory led to what seemed like an implausible conclusion: that a measurement made on one particle instantaneously determines the properties of another particle, no matter how great the distance between them. Einstein regarded this 'spooky' action at a distance as unacceptable — so if quantum mechanics demanded it, quantum mechanics must be incomplete.

This 'EPR experiment' became possible to perform in the 1980s, allowing Einstein's objections to be put to the test. Crucially, John Bell had shown in the 1960s that the existence of 'hidden variables' demanded that certain inequalities between measurable parameters be satisfied. Because the experimental EPR results did not satisfy Bell's inequalities, this seemed to eliminate the possibility of hidden variables.

Hess and Philipp have found a loophole in Bell's theorem which means that the existence of hidden variables can still be reconciled with the results of EPR experiments. They argue that Bell made certain assumptions about what Einstein's hypothetical variables would behave like. These assumptions need not be valid, in which case Bell overlooked a large class of possible hidden variables whose behaviour is consistent with the existing experimental findings.

Specifically, they find that if hidden variables are time-dependent and time-correlated, Bell's theory breaks down. Bell assumed that the joint conditional probability densities of a set of experimental outcomes are equal to the product of the individual conditional densities. With time-correlated variables, this is no longer the case.

The authors show that the results of EPR experiments can be explained with hidden-variable theories of this nature that invoke no 'spooky' action at a distance. This does not mean that hidden variables exist, of course — just that they cannot be so confidently ruled out.

Hess and Philipp themselves actually wrote this in their paper on the subject:

The proof of eq. (4) [Bell's inequality] involves a number of definitions and considerations that Bell has discussed in his book [10] and in his last paper [11]. They have been derived involving well-known arguments of relativity by use of light cones. However, there are two additional assumptions that Bell uses and that appear in all proofs of Bell-type inequalities. Bell assumes that ρ(λ) is the same over any run of experiments and that the conditional probabilities (given λ and settings) that A, B assume a certain value are also the same. These assumptions permit then certain factorizations that are vital for all proofs (see, e.g., p. 56 of ref. [10]). We argue below that these assumptions exclude time from the parameter space. The Bell theorem thus describes only time-independent processes. We believe that this greatly restricts the relevance of the Bell inequalities for EPR experiments and like to emphasize the following. The instrument settings need to be changed randomly during a run of measurements and given settings need to occur at random times in order to avoid certain well-known “loopholes” [3, 4] of Bell’s reasoning. The proof of Bell’s inequalities, however, is entirely silent about any role of time and the necessity for given settings to occur at random times. It does not exclude loopholes by putting restrictions on time-like correlations between settings and parameters. This clearly is inconsistent with the claim that Bell’s space of hidden parameters is entirely general. If it were and included time-related parameters, then it would need to restrict these parameters to exclude the well-known loopholes [3]. As we will see momentarily, the proofs à la Bell exclude time and time dependencies altogether.

Here λ refers to the hidden variable (i.e. the unknown scheme by which the results of a measurement are determined) and ρ(λ) represents a probability density of that hidden variable.

However, their arguments seem to have been taken apart by a few different authors. To quote one of the main objections:

Secondly, we did not mention time in our derivation [of Bell's theorem] at all because it was completely irrelevant. Our derivation concerned each run of the experiment. We did not compare actual outcomes under different settings at different times, but potential outcomes under different settings at the same time. Therefore, the argument in (1) Eqs [8] and [9], or in (2), end of the paragraph following Eq. [11], is completely beside the point.

To emphasize this point, consider (as a thought experiment) repeating the measurement procedure just described, not as a sequence of successive repetitions at the same locations, but in a million laboratories all over the galaxy. The prediction of local realism is that when we collect the one million sets of observed quadruples (A, B, X, Y ) together and compute four relative frequencies estimating the four conditional probabilities Pr{X =Y | AB = i j}, they will satisfy (up to statistical error) Bell’s inequality. It is of no importance that the distribution of hidden variables at different locations of the experiment might vary.

Hess and Philipp make a large number of criticisms of the assumptions of Bell’s theorem with the main theme being that variables at both locations can vary in time in a dependent way, leading to dependence between the outcomes, which Bell supposedly did not take account of. Before turning to their model, we confront our formalization of the metaphysical assumptions of local realism with the idea of time variation.

How could time variation invalidate the freedom assumption? One would have to argue that because of systematic long-time periodicities in the various component physical systems concerned, the outcomes of a complex series of events involving a card shuffle, a coin toss and the free will of an experimenter at one location are interdependent and highly correlated with the potential outcome of a certain polarization measurement at a distant location. A good experimental design, with rigorous randomization of the choice of settings, makes this totally implausible.


Once again, Startraveler, my thanks. You've explained that beautifully.

I am, however, still a little confused.

Let me just confirm what I think the position is using the entangled positron/electron pair as an example:

We have two detectors that have three positions: 0 degrees, 45 degrees up and 45 degrees down.

We produce a very large sample of entangled positron / electron pairs.

When the two detectors are set to 0 degrees [a], we have a perfect correlation (100%) between the spins of the two particles, one positive and one negative.

When the detectors are 45 degrees apart from each other [b], they correlate 71% of the time.

When the detectors are 90 degrees apart from one another [c], they correlate 50% of the time.

Substituting the values into the equation, we get:

[A and not B] + [B and not C] = 100 + 71 >= [A and not C] = 100

which doesn't seem to violate the inequality. What am I missing?

Edited by Tiggs
Brackets and b's and c's = Smileytastic
