Saturday 28 May 2011

An interpretation of the Copenhagen interpretation.

Has Lubos turned his coat? He looks at the Copenhagen interpretation and describes the "Lumo interpretation". This time he arrives at the same solution Matti found long ago: light-like CD cones. Personal ones. And he looks at paranormal phenomena. Amazing!

I cannot hold back a small comment:
Lubos has just realized that selves have their own light cones of reality and perception. So consciousness is different too, and so is the observer effect. The collapse gives the information and is a result of the measurement. Information is not material, not bound to material patterns, etc. So what is it?

The upshot is that he has discovered the Zero Energy Ontology :) Maybe also the p-adic algebra :) Isn't that ironic?

It looks like he has finally begun to THINK. I am sure he is good at the thinking process.

If only he stopped insulting people.
Ulla.


But I was wrong.
Lubos did not start thinking.
But I am of course no physicist, remember that, Lubos. I have only studied this for some years.

Earlier he looked at the absent dark matter. Ironically, TGD has been criticized precisely because Matti talks so much about dark matter. But what does dark matter mean? Does it have to be exotic? I guess not. So are both right? In fact, Kea also talks about the absence of dark matter. She can explain what it is.

Something Lubos has to do to save face!

Has he become a real crackpot? He himself talks a lot about insane crackpots. I hate that word. It has done so much harm. So many careers spoiled. For what? A fantasy? There is no M-theory, and maybe Lubos has finally been forced to realize that. So he goes back to see where it went wrong.

I am sorry, but this creates more fuzziness than it resolves. Lubos has much left to learn, and he must abandon his stubborn beliefs and start looking at the realities and experiments in biology, condensed matter, and quantum computing. I see little progress here, just the same old story.


To his text:

Pretty much everyone misunderstands the basic points of the Copenhagen interpretation, Lubos claims. I believe him.
There was no fundamental disagreement about the meaning of quantum mechanics among those people. Obviously, many other people such as Albert Einstein, Erwin Schrödinger, or Louis de Broglie didn't ever accept the Copenhagen interpretation but they didn't have any alternative.

Lots of fringe stuff, garbage, and crackpottery was later written by various people who weren't really part of the Copenhagen school of thought but who found it convenient to abuse the famous brand. That's why one can also hear that the Copenhagen school may (or even must) interpret the wave function as a real wave that collapses much like a skyscraper when it's hit by an aircraft on 9/11.


But nothing like that has ever been a part of the Copenhagen school of thought. If you open any complete enough description of the Copenhagen interpretation or if you look at Bohr's or Heisenberg's own texts, you will invariably see something like the following six principles:
  1. A system is completely described by a wave function ψ, representing an observer's subjective knowledge of the system. (Heisenberg)
  2. The description of nature is essentially probabilistic, with the probability of an event related to the square of the amplitude of the wave function related to it. (The Born rule, after Max Born)
  3. It is not possible to know the value of all the properties of the system at the same time; those properties that are not known with precision must be described by probabilities. (Heisenberg's uncertainty principle)
  4. Matter exhibits a wave–particle duality. An experiment can show the particle-like properties of matter, or the wave-like properties; in some experiments both of these complementary viewpoints must be invoked to explain the results, according to the complementarity principle of Niels Bohr.
  5. Measuring devices are essentially classical devices, and measure only classical properties such as position and momentum.
  6. The quantum mechanical description of large systems will closely approximate the classical description. (The correspondence principle of Bohr and Heisenberg)
Note that the very first point says that the wave function is a collection of numbers describing subjective knowledge. That doesn't mean that in practice, everything will be always subjective - or whatever the spiritual people have attributed to quantum mechanics. Of course that constant interactions between parts of the world - and different people - pretty much guarantee that they have to agree about many "objective properties". But as a matter of principle, this rule is important for quantum mechanics and Werner Heisenberg has never left any doubts that this is how one had to interpret it.

Heisenberg would often describe his interpretation of the wave function using a story about a guy who fled the city and we don't know where he is but when they tell us at the airport they saw him 10 minutes ago, our wave function describing his position immediately collapses to a smaller volume, and so on. This "collapse" may occur faster than light because no real object is "collapsing": it's just a state of our knowledge in our brain.

In practice, everyone can use pretty much the same wave function. But in principle, the wave function is subjective. ... So A and B will have different wave functions during much of the experiment. It's consistent for B to imagine that A had seen a well-defined property of S before it was measured by B - but B won't increase his knowledge in any way by this assumption, so it is useless. If he applied this "collapsed" assumption to purely coherent quantum systems, he would obtain totally wrong predictions. So the wave function is surely subjective if one wants to obtain a universal description of the world.
This is the problem between SR and GR, quantum gravity: how can a subjective Universe become part of an objective Universe? A collection of probability amplitudes is combined differently, says Lubos, before the squared 'absolute values' of the combinations are interpreted as probabilities. This is the essence. A superposition can remain a probability amplitude as long as it is not measured, squared, laid on-shell. The Schrödinger cat can be both living and dead at the same time. The cat can hold the superpositions in the entangled quantum wave state, and some probabilities can be measured by him or by someone else who is also entangled. This has been shown in quantum computer science. It is possible to know 'more than everything'. Then Lubos continues:
All probabilities of physically meaningful events may be calculated in this way - as the squared absolute value of some linear combination of the probability amplitudes.
This is false. Not all probabilities can be squared, only those that become 'subjective' in 'reality'. The rest continue unmeasured.
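To make the point concrete, here is a tiny numpy sketch (my own illustration, not Lubos's code) of the Born rule: probabilities are the squared absolute values of complex amplitudes, and an unmeasured superposition keeps its complex amplitudes and can still interfere.

```python
# Born rule toy example (my own illustration, not from Lubos's post).
import numpy as np

c1 = 0.6                                  # amplitude for outcome 1
c2 = 0.8 * np.exp(1j * np.pi / 3)         # amplitude for outcome 2, with a phase

# squaring only happens at measurement:
p1, p2 = abs(c1)**2, abs(c2)**2
print(p1, p2, p1 + p2)                    # 0.36 0.64 1.0

# measuring in a rotated basis combines amplitudes BEFORE squaring,
# so the relative phase matters - this is interference:
p_plus = abs((c1 + c2) / np.sqrt(2))**2
print(p_plus)                             # phase-dependent, unlike p1 + p2
```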

The notion of an objective collapse was introduced by John von Neumann in 1932 and he was clearly not a part of the Copenhagen school of thought, says Lubos. So no objective measurement is possible. The measurement is always subjective, and everything that happens with the wave function has to be subjective as well.
The laws of physics predict that with the state above, there is a 36% probability that we will measure the cat to be alive and 64% probability that it is dead. Just to be sure, there is a 0% probability that there will be both an alive cat and a dead cat.
Confusion between probability amplitudes and actual measurements, says Lubos, makes people say the cat is both living and dead. Well, in a way he is right, and at the same time it is like a scientist measuring utterly complex things summarily. The state of being, the ontology of both life and death, is something we don't know. What if we take something simpler, like color? Does that change the outcome? But color is an incoming quale, and it is also impossible today to say exactly what it is. Body length varies by as much as 7 cm depending on position and momentum. Weight varies much more. Is there anything that is stable? Maybe a crystal? But then there are very few probability amplitudes... Not even ordinary matter is stable.
So the measurement must be done for momentum and position at this time alone. Next time the situation varies. So is the outcome influenced by something else too? An oscillation coming from where?
Only one of the options with the nonzero entries, "dead" or "alive", will occur, and the probabilities are 64% and 36%, respectively.
How can those probabilities be measured? They are only probability amplitudes. The outcome may be a Bell curve? But in what form? Today we simply cannot answer. This can be done by assumption only.
There is no possible answer of the form "half-dead, half-alive"
I think many cats can be half-dead. Old ones that barely function, young ones that are not yet developed. When is a cat 'almost dead', or dead? For people there is a big debate about this. What is 100% alive? 100% health? This is a bad joke.
The Hamiltonian evolves the density matrix and dictates which states are "observable" in the classical sense. It's very clear that once we learn that the cat is alive, even though the chance was just 36%, the number 64% has to be replaced by 0% while 36% jumps to 100%.
Oops! What is this? Ad hoc! The 'collapse'! Why not try a superposition of states? No collapse happens. The cat can even be 98% dead :)
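The bookkeeping behind Lubos's cat numbers is simple enough to show (the 0.6 and 0.8 amplitudes behind his 36%/64% are from his post; the update step is my own sketch of his "36% jumps to 100%" statement):

```python
# Cat probabilities and the post-measurement update (a hedged sketch).
import numpy as np

psi = np.array([0.6, 0.8])                     # amplitudes for |alive>, |dead>
rho = np.outer(psi, psi)                       # pure-state density matrix
print(np.round(np.linalg.eigvalsh(rho), 10))   # [0, 1]: still a pure state

rho_decohered = np.diag(psi**2)                # off-diagonals gone: diag(0.36, 0.64)
print(np.diag(rho_decohered))                  # the predicted probabilities

rho_after_alive = np.diag([1.0, 0.0])          # we saw "alive": 36% -> 100%, 64% -> 0%
```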
It makes no sense to claim that it's "predetermined" that the cat would be seen as alive. The free-will theorem, among other, morally equivalent results, shows that the actual decision whether the cat is seen alive or dead has to be made at the very point of the spacetime where the event (measurement) takes place; it can't be a functional of the data (any data) in the past light cone.
This is the problem of the quantum jump, discussed by Matti. If the jump is always straightforward, then there is no change, no probability amplitude. What can influence the decision? Certainly history, very much: habit, attitudes, old thinking. "It has always been like this." This was realized by Einstein. In his black hole thinking it was the environment (gravity?) that 'collapsed' the Minkowskian light cone. Free will must be thought of as a decision to jump somewhere not computed in advance? What can do that? A surplus of energy? This question cannot be reduced to this 'measurement' here and now. In a comment Lubos says: the random aspect of any event that is predicted with a probability different from certainty - different from 0% and different from 100% - is decided at the very point of the spacetime where this event or measurement takes place.

If you have a perfect memory, then quantum mechanics predicts that you will remember all your past observations accurately with probability 100%. Because it's 100%, no "random generator" is needed.

The claim I am making - and Conway and Kochen are making - doesn't mean that the present has no relationship to the past. It says that the information that is produced by random outcomes now - and recall that the information is given by

-sum p_i ln(p_i)

so it vanishes if all p_i are equal to 0 or 1 - doesn't have any relationship to the past. It is literally random and decided right now, not as a function of any "hidden variables" that were inherited from the past. This is proved by the free-will theorem. However, the *probabilities* that one gets one thing or another *are* of course calculated from the knowledge about the state in the past, from the initial wave function if you wish. If all the probabilities are 0% or 100%, then the measurement produces no information because it's guaranteed in advance, so the question where this (empty) information came from is vacuous. Well, I am not convinced. A computation is a measurement.
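His information formula is easy to play with. A small snippet (mine, using the usual convention 0·ln(0) = 0) showing that the information vanishes exactly when every p_i is 0 or 1:

```python
# Shannon information -sum_i p_i ln(p_i), as quoted by Lubos.
import numpy as np

def information(probs):
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                          # the 0*ln(0) = 0 convention
    return -np.sum(p * np.log(p))

print(information([0.36, 0.64]))          # ~0.653 nats: the outcome is news
print(information([1.0, 0.0]))            # 0.0: a guaranteed outcome carries none
```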
Even more importantly, people often say that "rho = rho1 + rho2" decomposition of the density matrix means that there are "two worlds", one that is described by "rho1" AND one that is described by "rho2". (Similarly for "psi", but for "rho", the comments are more clear.) But this is a complete misunderstanding of what the density matrix and addition means. The density matrix is an operator encoding probabilities - its eigenvalues are predicted probabilities. And we're just adding probabilities, not potatoes.
And he continues on the theme. No, I can't agree. A superposition of states does not have to collapse. There can be, and there are, two worlds. Or three worlds: one of the 'subject' (observer), one of the 'object' (cat), and one of the probabilities (quantum world). Exactly WHAT is manifesting itself, and why? Note that the quantum world usually does not interact with the cat, only through windows. And measuring one character momentarily does not mean the whole entangled cat vanishes, nor that the probabilities vanish, only that the subject gains massless information. This is just rubbish. Look: "It is not possible to know the value of all the properties of the system at the same time; those properties that are not known with precision must be described by probabilities. (Heisenberg's uncertainty principle)" The Universe is digital.

The "Lumo interpretation" clearly:
you describe the system by a density matrix evolving according to the right equation and it is always legitimate to imagine that the world collapsed to an eigenstate of the density matrix and the probabilities of different eigenstates are given by the corresponding eigenvalues of the density matrix. Incidentally, this also works for pure states for which "rho = psi.psi". In that case, "psi" is the only eigenstate of "rho" with the eigenvalue of "1", so you may collapse into "psi" with 100% probability which leaves "rho" completely unchanged. ;-) The only illegitimate thing to imagine is that the world has collapsed into a state which is not an eigenstate of "rho".
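As I read this rule, it is a two-line computation (my own sketch, not Lubos's code): the legitimate "collapsed" alternatives are the eigenstates of rho, with probabilities given by the eigenvalues, and for a pure state the single unit eigenvalue makes the collapse trivial, exactly as he says.

```python
# The "Lumo rule" as an eigendecomposition of the density matrix (a sketch).
import numpy as np

psi = np.array([0.6, 0.8])
rho_pure = np.outer(psi, psi)             # rho = psi.psi for a pure state
vals, vecs = np.linalg.eigh(rho_pure)
print(np.round(vals, 10))                 # [0, 1]: collapsing to psi changes nothing

rho_mixed = np.diag([0.64, 0.36])         # a decohered cat
vals, vecs = np.linalg.eigh(rho_mixed)
print(vals)                               # [0.36, 0.64]: the allowed outcomes
```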
Uncertainty principle.
It means that generic pairs of properties - pairs of projection operators describing various Yes/No properties of a system - can't have well-defined values at the same moment. Pairs are usually oscillating, for instance solitons. Only (magnetic) monopoles can be 'singular'. What about the possible Higgs boson? Can it be a monopole?
Especially John von Neumann, before he began to say silly things, liked to emphasize that the nonzero commutators and the Heisenberg uncertainty principle is the actual main difference between classical physics and quantum physics. If the commutators were zero, the evolution of the density matrix would be equivalent to the evolution of the classical probabilistic distribution on the phase space.
What is this! A measurement without an observer? A dead Universe?
Because the projection operators P and Q corresponding to two Yes/No questions about a physical system typically don't commute with one another...
A non-commutative digital Universe? Made of pairs of Yes/No properties? But there must be some correlation? Some entanglement; both cannot be Yes. And two questions can also be dependent on each other. OK, typically...
This main principle is the main "underlying reason" why the GHZM experiment or Hardy's experiment produce results that are totally incompatible with the classical or pre-classical reasoning. The classical reasoning is wrong, the quantum reasoning is right - and the nonzero commutators in the real world are the main reason why the classical reasoning can't agree with the observations.
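That non-commutation is checkable in a few lines (my own example): take P = "is the spin up along z?" and Q = "is the spin up along x?".

```python
# Two Yes/No projection operators that don't commute (my own example).
import numpy as np

P = np.array([[1.0, 0.0], [0.0, 0.0]])          # projector onto |z+>
Q = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])    # projector onto |x+>

print(P @ Q - Q @ P)                      # nonzero: no simultaneous sharp answers
```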
Complementarity principle.
Bohr's favorite principle shows that systems exhibit both particle-like and wave-like properties, but the more clearly you can observe the latter, the more obscure the former has to be, and vice versa.
Shows no collapse! The wave-like patterns can be described as invisible fields? I remember that Lubos disliked the work of the Zeilinger group in Vienna. They have clearly shown the duality, and that both can coexist in an oscillation: expanding-compressing space/time.

The measurement devices follow the rules of classical physics. This is another source of misunderstandings, says Lubos. The apparatus behaves as a classical object. So in particular, you may assume that these classical objects - especially your brain, but you don't have to go up to your brain - won't ever evolve into unnatural superpositions of macroscopically distinct states. This was a point in which the Copenhagen interpretation was incomplete. They didn't quite understand decoherence, he claims.
Decoherence shows that the states of macroscopic (or otherwise classical-like) objects whose probabilities are well-defined [does he mean squared, on-shell?] are exactly those that we could identify with the "classical states" - they're eigenstates of the density matrix. The corresponding eigenvalues - diagonal entries of the density matrix in the right basis - are the predicted probabilities.
How can probabilities be well-defined? By the non-existent collapse? No!
However, they were saying that there is such a boundary at which the quantum subtleties may be forgotten for certain purposes and they were damn right. There is such a (fuzzy) boundary and we may calculate it with the decoherence calculus today. The loss of the information about the relative phase of the probability amplitudes between several basis vectors is the only new "thing" that occurs near the boundary.

The boundary is always fuzzy because decoherence is self-evidently a continuous process in which the off-diagonal elements quickly (at a certain time scale) but gradually drop to zero. (in comments)
This is only speculation. On the contrary, the fuzziness problem of boundaries points to QM effects (for instance quantum tunneling, sum over paths).
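Still, what "off-diagonal elements gradually drop to zero" means is easy to picture with a toy model (mine; Lubos elsewhere quotes a decay like exp(-exp(t)), I use a plain exponential for simplicity):

```python
# Toy decoherence: coherences decay, diagonal probabilities stay put.
import numpy as np

for t in [0.0, 0.2, 0.5, 1.0]:
    damp = np.exp(-5.0 * t)               # assumed decay rate, for illustration
    rho_t = np.array([[0.36, 0.48 * damp],
                      [0.48 * damp, 0.64]])
    print(t, np.round(rho_t[0, 1], 4))    # never exactly zero, just tiny
```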
when the history contains a large collection of decohered outcomes, all observers may ultimately reconstruct the same macroscopic past. It's just the "intermediate state of affairs" before the individual events that are "fuzzy" - in Feynman's approach, one has to sum over all histories how to get from A to B so everything that happens between A and B are just "intermediate results" that don't have any objective properties.
But what if the intermediate states get inputs from outside? Information loss or gain? Look at photosynthesis! A lot happens in the intermediate states. How do you lock them into a black box? Topology/memory? Entropy?
A commenter, Illuminated: Nature is inherently quantum. But human language and thoughts have a classical structure despite the fact that they emerge from an underlying quantum substrate. So, to describe nature using language, equations and thoughts forces us to come up with a classical description of something which is inherently quantum. From a classical point of view, we have to treat the quantum nature as some black box oracle which takes in classical inputs and spits out classical outputs. If we insist upon coming up with a classical model of something which is inherently quantum, that is like constructing mechanical models of the electromagnetic aether. This leaves us with extremely inefficient classical auxiliary constructs like wave functions or density matrices which are exponentially long vectors, or a functional integral over some extremely large functional space of paths. Asking about the actual value of a particular wave function component, or the nature of the functional space of all paths is like asking about the cogs and gears making up the electromagnetic aether.
If we wish to calculate, until the day practical quantum computer simulations become available, we have to resort to classical equations. That's why we have to resort to wave functions, density matrices and path integrals. But otherwise, we should treat nature as a black box oracle. What do you do with black boxes? You only consider the inputs and outputs.

What the "subjective" nature of wave function means that wave function is really a gadget that collects all the information about the system that may be known to an observer, and that may be used to predict the future. The only subjective thing about it is that at different points of space and time, i.e. at locations of different observers, different things about the past are known and used as the input - initial wave function - while the rest is to be calculated probabilistically.
To predict is to compute probable outcomes and project them into the future. Usually we use information in clumps, in analogies. So there is much implicit information in our predictions. Note the personal light-like CD à la TGD.
In a comment: When you say that the state is "psi", it means that "psi" is what you get by time evolution from all the most recent past data that you could have measured or otherwise learned.

Another observer CD who studies you may, however, use a different state vector such as 0.6 psi1 + 0.8 psi2 where psi1 is the state in which you hold a particle apparently in state psi1 and say that it's in state psi1, and another state where you hold a particle in state psi2 and prepare to do the calculation based on this assumption that it's in state psi2.

The observer CD may only decide that psi1 is correct and psi2 is wrong *after* he actually measures the combined system of you and the particle. Before he actually gets this knowledge, he will be using a different function than you. The wave function that a given observer is using depends on what he already knows about the outcomes that have taken place. This is the sense in which it's subjective.


The brain gets its information in a state reduction process, or decoherence. The 40 Hz state reveals much work being done, suddenly 'collapsed' into 7 Hz when the information is gained/selected. But much information (as probabilities) is rejected at the same time. Could this also be compared to a 'sum over paths'? Thanks for this. But this 'observer' CD is from TGD.

For more about the links between decoherence and the second law, see e.g. http://arxiv.org/abs/gr-qc/9402006

In comments: Even a geometrically small brain composed out of a small number of atoms will be able to "perceive" almost perfectly decohered outcomes - one may increase the amount of decoherence by extending the period of time in which the decoherence is taking place. However, again, if the system is small and the time is short so that the decoherence is very far from perfect, the objects shouldn't be imagined to observe sharp outcomes of any measurements. Such objects have to be treated as coherent quantum objects that don't perceive anything and that always have a chance to "forget" what they already learned.

In some sense, it's analogous to virtual particles. Virtual particles may momentarily violate inequalities for the energy, or tunnel through a barrier, and so on, as long as dE.dt doesn't exceed hbar, the mandatory uncertainty limit. Similarly, objects that are too small and don't perfectly decohere may "perceive" some outcome, but their perception may be undone in the future. This is the quantum realm. Sharp perceptions of outcomes that hold and that can't be revised only occur in the classical limit where the decoherence is perfect and completed. In the real world, it's never quite perfect, but because rho(A1,A2) goes like exp(-exp(t)), one gets very quickly extremely close to the ideal limit.


I think of the dreaming state versus the waking state. Lubos: All the imperfectly decohered perceptions could be subject to revisions - some of them would be illusions. - If you were just a generic microscopic system that has no chance to have macroscopic perceptions, the observer would have to describe you properly in the QM framework using a wave function that never collapses. This is the superorganisms, the bacteria as whole cells, not genomes. Or cells in the body, mitochondria? What determines the quantum:classical divide in measurements? Not size alone! Also complexity, networks, expansion-compression. Rigidity:flexibility? Einstein talked of inertia.
Lea Luke: Concerning the abstract "non-reality" of the complex vector spaces (Hilbert spaces?) themselves: I read that they are real in the sense that they carry momentum and energy even when no particles are there. Lubos answers: it has a relation to classical logic - rules for adding and multiplying probabilities in classical physics. Whenever the questions whose probability we calculate by QM are correctly formulated, the probabilities follow all the classical laws. P(A or B) is equal to P(A)+P(B)-P(A and B), P(U and V) = P(U) P(V) if U,V are independent, and so on, and so on. But the new thing is that in quantum mechanics, probabilities themselves are "composite", they're of the form

P(A) = sum (psi_c psi*_d A_cd)

where A_cd is the matrix representing the projection operator associated with property A. Note that the probability P is schematically "psi squared" - it's made of something more fundamental, something that is complex and may interfere.

The other, often emphasized, difference is that in quantum mechanics, you may *only* talk about the classical probabilities of properties that have already decohered from their alternatives. There is no probability of "intermediate" properties of a system prior to the measurement that would obey the classical laws - there are only wave functions, the more fundamental complex probability amplitudes, and these amplitudes are being added - before they're squared at the moment of the measurement.
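The quoted formula is concrete enough to compute. A minimal numpy sketch (my own illustration): with A a projection operator, the sum is just "psi squared".

```python
# P(A) = sum_{c,d} psi_c psi*_d A_cd, as quoted (my own illustration).
import numpy as np

psi = np.array([0.6, 0.8j])               # complex amplitudes that may interfere
A = np.array([[1.0, 0.0],
              [0.0, 0.0]], dtype=complex) # projector for the Yes/No property A

P_A = np.einsum('c,d,cd->', psi, psi.conj(), A)
print(P_A.real)                           # 0.36: "psi squared"
```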


Oh, my dear God. Would this psi squared be BETTER than p-adics and primes? These guys REFUSE to discuss with Matti??? I cannot believe my eyes. WHAT are they doing?

Correspondence principle
Both Bohr and Heisenberg also emphasized the correspondence principle, e.g. that the quantum equations reduce to the classical ones in the appropriate limit. If you study e.g. the evolution of the expectation value of "x" and "p", they will evolve according to the classical equations. It's the Ehrenfest theorem.
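That remark can be checked numerically. A small sketch (my own toy example, assuming hbar = m = omega = 1 and a truncated Fock basis): for a coherent state of a harmonic oscillator, the expectation value <x>(t) tracks the classical cosine trajectory.

```python
# Ehrenfest theorem check for the harmonic oscillator (my own toy example).
import numpy as np
from scipy.linalg import expm
from scipy.special import factorial

N = 60                                        # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
x = (a + a.T) / np.sqrt(2)                    # position operator
H = a.T @ a + 0.5 * np.eye(N)                 # Hamiltonian

alpha = 1.5                                   # real coherent-state parameter
n = np.arange(N)
psi0 = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt(factorial(n))

for t in [0.0, 0.5, 1.0, 2.0]:
    psi_t = expm(-1j * H * t) @ psi0
    x_qm = np.real(psi_t.conj() @ x @ psi_t)
    x_cl = np.sqrt(2) * alpha * np.cos(t)     # classical solution, same initial data
    print(f"t={t:3.1f}  <x>={x_qm:+.4f}  classical={x_cl:+.4f}")
```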

As discussed in the previous point, it is not enough to show that the world of classical perceptions will occur in the limit as well: we also need to know that the right "basis" will become relevant in the classical limit. Nevertheless, with the extra additions that had been demonstrated in recent decades, I mean decoherence, we know that it is true that the classical perception and choice of states does occur in the appropriate classical limit.
The holographic principle? The quantum world manifests on a surface (2-D, that is, spacetime sheets)? What are the rules? This seems like nonsense to me. Sorry, Lubos.
This boundary doesn't mean that quantum mechanical laws ever break down. They never break down. What's true is that for large enough systems, one may use - and should use - the approximate classical scheme (the word "approximate" sounds too scary but in reality, these approximations are super excellent for all practical and even most of the impractical purposes) of asking questions because one may show that it becomes legitimate.
Comment by pbfred: Einstein forgot about the disadvantages of a principle theory and developed General Relativity which, if not a principle theory, builds on the principles of special relativity, which cannot be visualized. This set the stage for Quantum Mechanics, which is itself nonvisualizable and based on a good many principles. The biggest one is the Uncertainty Principle. Another is the principle of Wave-Particle Duality. The Pauli Exclusion Principle is just an algorithm that for some reason seems to work. Pauli admitted in 1945 that he could find no reason as to why it worked. Of course, this applies to the Uncertainty Principle too, for it just happens to work - but why does it work?

In the comments from Lubos, a comment about Einstein worship:
Einstein has done great things. When I was a high school student, I read his Mein Weltbild obsessively and about 5 times. He was my God. And I did fundamentalistically believe that realism had to be correct up to some point, too. Then I tried to explain the simple patterns in atomic physics etc. and I had to be stealing pieces of quantum mechanics to explain it in my "would-be" realist framework. Of course, at the end, I had to rediscover/steal all of quantum mechanics.

When I just went through all the evidence and the critical properties and experiments, it became manifest that despite the "intuitive naturalness" of Einstein's expectations, he was just wrong. Physics is not about uncritical worshiping of cults. Of course, as time continued, it became clear to me that Einstein was fundamentally wrong about many things. It's not just quantum mechanics. It was his expectation that the strong force and the weak force (nuclear forces) would ultimately go away and become non-fundamental, and so on.

Of course when I was 15, but no longer at 16, I shared those beliefs. But these beliefs are just wrong. Despite his unusual and revolutionary contributions in a certain era, Einstein just wasn't able to stay in the "actual" top of physics after the mid 1920s, despite the fact that he of course remained the most popular and appreciated guy with the media. But science has made lots of progress after 1916 when GR was being completed. In my view, the quantum revolution was more profound than the relativistic one. It wasn't done by one person only - like Einstein's relativity - but the total importance was bigger. One must separate personal appraisals and emotions from science. It's just true that Einstein, despite his giant contributions and likability, wasn't at the top of the physics research since the 1920s. The idea that revolutionaries lose some flexibility to understand important new developments is much more universal. Think about Dirac vs renormalization, Feynman vs strings, and many many other examples.

If someone can't accept such things given the evidence, he's doing cult worshiping and not science. Science doesn't have its infallible prophets. It just doesn't work in this way. So I find the very focus in your comment on exciting emotions about Einstein something that shouldn't be done if we are supposed to talk about science. It's emotional harassment, not science. Einstein was wrong whether it sounds popular or not.


Brian: "
reading this blog is sort of like a physics PhD's version of right-wing talk radio." Ye, why should we? Lubos will never change.

8 comments:

  1. Ulla,

    If we observe Lubos do we change him?

    If we try to view him from a certain reference frame does that change what we see of him?

    If we pass by Lubos at the velocity of light in some sense is he rotated so we only see the other side of the object from where he was?

    If Lubos is entangled with us and beyond the line of sight do we remain entangled with him?

    As I posted, a little annoyed with Lubos's position, in a comment to him - taking the high road really - a quote from Einstein: "Politics is temporary but an equation is forever..."

    ThePeSla

  2. In a comment to me, I quote:

    I read the article in your link. The first thing I noticed is that all of the fundamental principles were discovered in the 1930s. It seems odd to me that the old arguments have continued so long and not been resolved. Maybe science took a wrong turn and ignored the things that will bring resolution. Where is all of this leading?

    Physics has become comfortable with unprovable theories that meet the mathematical requirements. You should make them uncomfortable with a biological reality. It's time to move beyond the 1930s.


    This is just so OLD. Have they really not come any further since Einstein? And the Copenhagen interpretation. What does this tell us? That there are some wrong interpretations that are a hindrance. We must find out which.

    In this sense TGD is a good lead, light-years ahead of this silly discussion.

    The cat in the black box could be the Higgs. Is he there or not? What can make him disappear? Is he disappearing for every observer or just for the subjective observer? A secret in the box :) If you tell it, is there no secret anymore? Is it truth or a lie?

    What is the black box?

    Lubos is just dimming this discussion.

  3. I recall reading a book around 1966, Thirty Years that Shook Physics.

    It said that after 1930 or so there really was not much progress made, that the theoreticians were stuck. I suppose more subatomic research and then string theory looked like a great breakthrough for a while.

    But do we not still come up against the logic of all this, the same OLD logic in new applications?

    If God is the ultimate observer, and not a subjective being, and the laws of physics very wide with lesser subjects in them or not, could God make the ultimate multiverse of Lubosi disappear?

    This gets a little fuzzy as we approach the surface of some gravitational object and try to divine its hologram and so in self-analysis find the light shifting and dimming to ourselves or others - which is truth?

    In the Revelations it states that "at the end times we hath no need of candles for God supplies the light, and the mystery (secret) of God will be finished."

    Where does the candlelight ultimately vanish as it goes out into the expanse of time and space? How many light years is that?

    Hmmmm- sorry I got the quotes and the comment on them (en)tangled up. But oddly my replies above seem to fit.

    One point - the biological view, as a philosophy or even a counter to some religions (some think a lesser philosophy), came way before the quantum discoveries.

    ThePeSla

  4. "If God is the ultimate observer, and not a subjective being" - the essence was that there are no objective observer,- so 'God' must be subjective. But what is he? He is no person, no human, but maybe some kind of dark 'organism'. I would like to say he is quantum entanglement itself :) So dark matter can do measurements and 'observe'. It must not necessarily be living organisms. In the example with Higgs it was the particle that did the measurements first (they try to get the measurement with adding energy), and later a human did measurements. In order for the wavefunction to 'collapse' there must be an interference. Lubos tells in no way what that interference might be (psi?).

    I think the Revelation may very well tell the truth, but with words that we don't recognize. It is a possible scenario. The candlelight vanishes into matter. So in reality we are the dark aspect, and the dark matter is the candlelight. Humans like to change things into their inverse for some reason :)

    I don't want Lubos to disappear, because it is good to have different opinions. I want him to have a dialogue also outside 'M-theory'. It is reasonable that if he says something is worth nothing, he also tells us WHY he thinks so. And I want him to stop insulting people.

    In fact, Lubos acts very childishly and emotionally. Maybe bad genes? But I don't believe genes have that profound an effect. Maybe the nurture aspect? I don't know. Anyhow, he ought to be able to behave.

    And I must say I doubt philosophy can give us the answer. I like philosophy very much, but it serves more the ability to look from different viewpoints, which is also very much needed. Philosophy should be back in school.

  5. http://physicsworld.com/cws/article/indepth/46155

    Now, using a technique known as “weak measurement” Steinberg and his team say they have managed to accurately measure both position and momentum of single photons in a two-slit interferometer experiment. The work was inspired by one of Steinberg’s colleagues, Howard Wiseman of Griffith University, Australia, who proposed in 2007 that it may be possible to use weak measurements (Yakir Aharonov) to determine momenta and positions in the double slit experiment. Steinberg was immediately fascinated and began to see how this would become experimentally viable.
    The theory states that it is possible to "weakly" measure a system and so gain some information about one property without appreciably disturbing the complementary property and so the future evolution of the entire system. Though the information obtained for each measurement is minimal, an average of multiple measurements gives an accurate estimation of the measurement of the property without distorting its final outcome.

    In their experiment, the researchers sent an ensemble of single photons through a two-slit interferometer and performed a weak measurement so as to imprecisely measure the momentum of each photon. This was done using a piece of calcite, which serves as a polarizer. Depending on the direction of propagation each photon is differently polarized and the direction is measured as a function of position. This was then followed by an extremely accurate measurement of the final position of where each photon hits the “screen”, which in their case, was a camera. By combining the positions measured imprecisely at multiple points and the momentum precisely measured at the end for each photon, they were able to accurately construct an entire flow pattern for the photons.

    “This weak momentum measurement does not appreciably disturb the system, and interference is still observed. Both measurements had to be repeated on a large ensemble of particles in order to gain enough information for the whole system, but we did not disturb the outcome at all.” explains Steinberg. "Our measured trajectories are consistent, as Wiseman had predicted, with the realistic but unconventional interpretation of quantum mechanics of such influential thinkers as David Bohm and Louis de Broglie,"

    The single photons they used in the experiment were emitted by a liquid helium-cooled InGaAs quantum dot that is optically pumped by a laser, specially developed at the National Institute for Standards and Technology in Colorado, US. The dot then emitted single photons at a wavelength of 943 nm.

    Past, present and future

    The double slit experiment heavily influenced the principle of complementarity devised by Niels Bohr. Complementarity states that observing complementary variables, such as the particle-like trajectories and the wave-like interference in the double-slit experiment, depends on the type of measurement made – the system cannot behave as both a particle and wave simultaneously. Steinberg's recent experiment suggests this doesn't have to be the case – the system can behave as both.

    So would either Einstein or Bohr be pleased or surprised that this seemingly impossible measurement has been made? “Well, I don’t think that Einstein would be surprised at all! But at the same time I do not think that this would make him more comfortable with the quantum mechanics of single systems” says Steinberg, explaining that Einstein was eager to accurately measure all the parameters of a single quantum system, something that we are not capable of just yet. “Bohr is a different matter… I doubt that his contemporaries even understood exactly what he was trying to say” says Steinberg. “But maybe this measurement would make him [Bohr] slightly more careful with his language while talking about complementarity.”

  6. from:


    http://www.sciencemag.org/content/332/6034/1170.abstract
    A consequence of the quantum mechanical uncertainty principle is that one may not discuss the path or “trajectory” that a quantum particle takes, because any measurement of position irrevocably disturbs the momentum, and vice versa. Using weak measurements, however, it is possible to operationally define a set of trajectories for an ensemble of quantum particles. We sent single photons emitted by a quantum dot through a double-slit interferometer and reconstructed these trajectories by performing a weak measurement of the photon momentum, postselected according to the result of a strong measurement of photon position in a series of planes. The results provide an observationally grounded description of the propagation of subensembles of quantum particles in a two-slit interferometer.

  7. http://www.empiricalzeal.com/2011/06/10/why-a-quantum-particle-is-not-like-a-water-drop-a-tale-of-two-slits-part-1

  8. http://motls.blogspot.de/2012/10/evading-quantum-mechanics-again.html?m=1
    http://arstechnica.com/science/2012/09/demolishing-heisenberg-with-clever-math-and-experiments/

    It's a title of three lies because the authors aren't demolishing Heisenberg but working tightly within the framework he co-discovered; their maths isn't clever but rather completely trivial; and they have done no experiments.

    The subtitle of the Ars Technica article is "Good, general measurement choices eliminate uncertainty." Thank God, we may return to the 19th century again.

    The paper on which this hype is based is trying to do "pretty much the same general thing" as the guys abusing the misleading concept of a weak measurement. They just have a different name with different misinterpreted maths for the same thing, "quantum non-demolition experiments".
    Whenever a physical system carries at least some information – whenever its Hilbert space is at least two-dimensional – there will always be observables that don't commute with a given one (unless it is a multiple of the unit matrix, λ⋅1: and these special matrices don't measure anything because their eigenvalues λ are constant and independent of the state of the physical system) simply because matrices 2×2 and larger don't commute with each other. It's that simple.

    The other matrices that don't commute with a given one aren't artificial in any sense. They're pretty much as fundamental as the original matrix. Just think about the three Pauli matrices: they describe three components of the spin and all these three components are clearly equally natural; after all, they are related by the rotational symmetry. The most widespread example – position and momentum – really conveys the general story, too. The position operator and the momentum operator don't commute with each other,
    [x,p]=iℏ
    and it has consequences even for position itself because the time-derivative of the position is the velocity v which is nothing else than a multiple of the momentum, v=p/m. So the position doesn't commute with its own time derivative which implies
    [x(t),x(t′)]≠0.
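
    A quick numerical footnote (my own snippet, not from the quoted post): 2x2 and larger matrices generically don't commute, e.g. the three Pauli matrices, which satisfy [sigma_x, sigma_y] = 2i sigma_z.

    ```python
    # Pauli matrices don't commute (my own snippet).
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    comm = sx @ sy - sy @ sx
    print(comm)                               # equals 2i*sigma_z, not zero
    print(np.allclose(comm, 2j * sz))         # True
    ```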
