Monday, 30 May 2011

Antimatter asymmetry?

From Fermilab Today:

It is often said that a proton is made of three quarks: two of the same type, called up quarks, and one of a different type called a down quark. But that's not the whole story. In the space between these three stable quarks there is a boiling soup of quark–antiquark pairs. That is, a quark and an antimatter quark spontaneously come into existence, drift a while, and then recombine, destroying one another. This happens all the time— in every proton in every atom of every cell of our bodies, and in all of the matter in the universe.

When two protons collide in the LHC, most of the individual quarks miss each other. Often only one quark or antiquark from each proton collides directly. When an up quark collides with an anti-down quark, the two can combine to form a W+ boson; similarly, a down quark and an anti-up quark can combine to form a W− boson. In both cases, an antiquark is involved. Thus, each of the millions of W bosons produced at the LHC must come from at least one of these transient particles, caught before it had a chance to sink back into the soup.

CMS scientists recently measured the ratio of W+ to W− production in proton collisions at the LHC. The number of W+ bosons exceeds the number of W− bosons by about 40 percent, partly because each proton has two stable up quarks for every stable down quark. However, the exact ratio also depends on the density of the quark-antiquark soup.
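A rough way to see where that 40 percent comes from (my back-of-the-envelope sketch, not part of the Fermilab article): at lowest order the ratio is set by the parton densities,

    sigma(W+) / sigma(W-) ~ [u(x1) dbar(x2) + dbar(x1) u(x2)] / [d(x1) ubar(x2) + ubar(x1) d(x2)]

If the sea were flavour-symmetric (ubar roughly equal to dbar), this would reduce to roughly u/d. Pure valence counting gives 2, and the measured value of about 1.4 sits below that precisely because much of the u and d content is sea rather than valence - which is why the ratio probes the thickness of the quark-antiquark soup.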

My comment: Does this express an asymmetry in favor of antimatter? Note that the different quarks have different masses and a hierarchy. Kea today: the remaining mystery is why the scales have the ratios that they do, namely roughly 2 and 36 for the down/lepton and up/lepton scales respectively. Here with quarks we are talking of protons, fermions. Do they also have a hierarchy? l-adic/p-adic?

Counting W+ and W− bosons yields new insight into the dynamic structure of protons, which is too complicated to compute from first principles with current techniques. It also informs predictions of new physics: The rate at which hypothetical particles would be produced depends on the density of quark-antiquark pairs, for the same reason that W bosons do. It is important to know the thickness of this soup when imagining what else might spring from it.

— Jim Pivarski

Saturday, 28 May 2011

An interpretation of the Copenhagen interpretation.

Has Lubos turned his coat? He looks at the Copenhagen interpretation and describes the "Lumo interpretation". This time he finds the same solution as Matti found long ago. Light-like CD cones. Personal ones. And he looks at paranormal phenomena. Amazing!

I cannot hold back a small comment:
Lubos just realized the fact that selves have their own light cones of reality and perception. So consciousness is different too, and so is the observer effect. The collapse gives the information and is a result of the measurement. Information is not material, not bound to material patterns, etc. So what is it?

The upshot is that he discovered the Zero Energy Ontology :) Maybe also the p-adic algebra :) Isn't that ironic?

Looks like he has finally begun to THINK. I am sure he is good at the thinking process.

If he only stopped insulting people.
Ulla.


But I was wrong.
Lubos did not start thinking.
But I am of course no physicist, remember that, Lubos. I have only studied this for some years.

Earlier he looked at the absent dark matter. Ironically, TGD has been accused exactly because Matti talks so much of dark matter. But what does dark matter mean? Does it have to be exotic? I guess not. So both are right? In fact, Kea also talks of the absence of dark matter. She can explain what it is.

Something Lubos has to do to save face!

Has he become a real crackpot? He himself talks much of insane crackpots. I hate that word. It has done so much harm. So many careers spoiled. For what? A fantasy? There is no M-theory, and maybe Lubos has been forced to realize that at last. So he goes back to see where it went wrong.

I am sorry, but this creates more fuzziness than it solves. Lubos has much left to learn, and he must leave his stubborn beliefs and start looking at realities and experiments in biology, condensed matter and quantum computer theory. I do not see much progress in this, just the same old story.


To his text:

Pretty much everyone misunderstands the basic points of the Copenhagen interpretation, Lubos claims. I believe him.
There was no fundamental disagreement about the meaning of quantum mechanics among those people. Obviously, many other people such as Albert Einstein, Erwin Schrödinger, or Louis de Broglie didn't ever accept the Copenhagen interpretation but they didn't have any alternative.

Lots of fringe stuff, garbage, and crackpottery was later written by various people who weren't really part of the Copenhagen school of thought but who found it convenient to abuse the famous brand. That's why one can also hear that the Copenhagen school may (or even must) interpret the wave function as a real wave that collapses much like a skyscraper when it's hit by an aircraft on 9/11.


But nothing like that has ever been a part of the Copenhagen school of thought. If you open any complete enough description of the Copenhagen interpretation or if you look at Bohr's or Heisenberg's own texts, you will invariably see something like the following six principles:
  1. A system is completely described by a wave function ψ, representing an observer's subjective knowledge of the system. (Heisenberg)
  2. The description of nature is essentially probabilistic, with the probability of an event related to the square of the amplitude of the wave function related to it. (The Born rule, after Max Born)
  3. It is not possible to know the value of all the properties of the system at the same time; those properties that are not known with precision must be described by probabilities. (Heisenberg's uncertainty principle)
  4. Matter exhibits a wave–particle duality. An experiment can show the particle-like properties of matter, or the wave-like properties; in some experiments both of these complementary viewpoints must be invoked to explain the results, according to the complementarity principle of Niels Bohr.
  5. Measuring devices are essentially classical devices, and measure only classical properties such as position and momentum.
  6. The quantum mechanical description of large systems will closely approximate the classical description. (The correspondence principle of Bohr and Heisenberg)
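To make point 2 concrete (my worked example, using the same cat-state numbers that appear later in Lubos's post): if the state is psi = 0.6 |alive> + 0.8 |dead>, then the Born rule gives P(alive) = |0.6|^2 = 0.36 and P(dead) = |0.8|^2 = 0.64. The probabilities add up to 1 exactly because the state is normalized: 0.6^2 + 0.8^2 = 1.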
Note that the very first point says that the wave function is a collection of numbers describing subjective knowledge. That doesn't mean that in practice, everything will be always subjective - or whatever the spiritual people have attributed to quantum mechanics. Of course that constant interactions between parts of the world - and different people - pretty much guarantee that they have to agree about many "objective properties". But as a matter of principle, this rule is important for quantum mechanics and Werner Heisenberg has never left any doubts that this is how one had to interpret it.

Heisenberg would often describe his interpretation of the wave function using a story about a guy who fled the city and we don't know where he is but when they tell us at the airport they saw him 10 minutes ago, our wave function describing his position immediately collapses to a smaller volume, and so on. This "collapse" may occur faster than light because no real object is "collapsing": it's just a state of our knowledge in our brain.

In practice, everyone can use pretty much the same wave function. But in principle, the wave function is subjective. ... So A and B will have different wave functions during much of the experiment. It's consistent for B to imagine that A had seen a well-defined property of S before it was measured by B - but B won't increase his knowledge in any way by this assumption, so it is useless. If he applied this "collapsed" assumption to purely coherent quantum systems, he would obtain totally wrong predictions. So the wave function is surely subjective if one wants to obtain a universal description of the world.
This is the problem between SR and GR, quantum gravity. How can a subjective Universe become part of an objective Universe? A collection of probability amplitudes combined differently, says Lubos, before the squared 'absolute values' of the combinations are interpreted as probabilities. This is the essence. A superposition can remain a probability amplitude as long as it is not measured, squared, laid on-shell. The Schrödinger cat can be both living and dead at the same time. The cat can have the superpositions in the entangled quantum wave state, and some probabilities can be measured by him or by someone else who is also entangled. This has been shown in quantum computer science. It is possible to know 'more than everything'. Then Lubos continues:
All probabilities of physically meaningful events may be calculated in this way - as the squared absolute value of some linear combination of the probability amplitudes.
This is false. Not all probabilities can be squared, only those that become 'subjective' in 'reality'. The rest remain unmeasured.

The notion of an objective collapse was introduced by John von Neumann in 1932 and he was clearly not a part of the Copenhagen school of thought, says Lubos. So no objective measurement is possible. The measurement is always subjective, and everything that happens with the wave function has to be subjective as well.
The laws of physics predict that with the state above, there is a 36% probability that we will measure the cat to be alive and 64% probability that it is dead. Just to be sure, there is a 0% probability that there will be both an alive cat and a dead cat.
Confusions between probability amplitudes and actual measurements, says Lubos, make people say the cat is both living and dead. Well, in a way he is right, and at the same time it is like a scientist measuring utterly complex things summarily. The state of being, ontology, both life and death, is something we don't know what it is. What if we take something simpler, like color? Does it change the outcome? But color is incoming qualia, and it is also impossible today to say exactly what it is. Body length varies by as much as 7 cm depending on position and momentum. Weight varies much more. Is there anything that is stable? Maybe a crystal? But then there are very few probability amplitudes... Not even ordinary matter is stable.
So the measurement must be done for the momentum and position, at this time alone. Next time the situation varies. So is the outcome influenced by something else too? An oscillation coming from where?
Only one of the options with the nonzero entries, "dead" or "alive", will occur, and the probabilities are 64% and 36%, respectively.
How can those probabilities be measured? They are only probability amplitudes. The outcome may be a Bell curve? But in what form? Today we simply cannot answer. This can be done by assumptions only.
There is no possible answer of the form "half-dead, half-alive"
I think many cats can be half-dead. Old ones that barely function, new ones that are not yet developed. When is a cat 'almost dead' or dead? For people there is a big debate about this. What is 100% alive? 100% health? This is a bad joke.
The Hamiltonian evolves the density matrix and dictates which states are "observable" in the classical sense. It's very clear that once we learn that the cat is alive, even though the chance was just 36%, the number 64% has to be replaced by 0% while 36% jumps to 100%.
Ooops! What is this? Ad hoc! The 'collapse'! Why not try a superposition of states? No collapse happens. The cat can even be 98% dead :)
It makes no sense to claim that it's "predetermined" that the cat would be seen as alive. The free-will theorem, among other, morally equivalent results, shows that the actual decision whether the cat is seen alive or dead has to be made at the very point of the spacetime where the event (measurement) takes place; it can't be a functional of the data (any data) in the past light cone.
This is the problem of the quantum jump, discussed by Matti. If the jump is always straightforward, then there is no change, no probability amplitude. What can influence the decision? Certainly the history, very much: habit, attitudes, old thinking. "It has always been like this." This was realized by Einstein. In his black hole thinking it was the environment (gravity?) that 'collapsed' the Minkowskian light cone. Must free will be thought of as a decision to jump somewhere not computed in advance? What can do that? A surplus of energy? This question cannot be reduced to this 'measurement' here and now. In a comment Lubos says: the random aspect of any event that is predicted with a probability different from certainty - different from 0% and different from 100% - is decided at the very point of the spacetime where this event or measurement takes place.

If you have a perfect memory, then quantum mechanics predicts that you will remember all your past observations accurately with probability 100%. Because it's 100%, no "random generator" is needed.

The claim I am making - and Conway and Kochen are making - doesn't mean that the present has no relationship to the past. It says that the information that is produced by random outcomes now - and recall that the information is given by

-sum p_i ln(p_i)

so it vanishes if all p_i are equal to 0 or 1 - doesn't have any relationship to the past. It is literally random and decided right now, not as a function of any "hidden variables" that were inherited from the past. This is proved by the free-will theorem. However, the *probabilities* that one gets one thing or another *are* of course calculated from the knowledge about the state in the past, from the initial wave function if you wish. If all the probabilities are 0% or 100%, then the measurement produces no information because it's guaranteed in advance, so the question where this (empty) information came from is vacuous. Well, I am not convinced. A computation is a measurement.
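A small numerical check of the information formula he quotes (my sketch in Python; the 36%/64% numbers are the cat probabilities from above):

    import math

    def information(probs):
        # -sum p_i ln(p_i), with the convention 0 * ln(0) = 0
        return -sum(p * math.log(p) for p in probs if p > 0.0)

    print(information([1.0, 0.0]))    # 0.0 - a certain outcome carries no information
    print(information([0.36, 0.64]))  # ~0.653 nats, a bit under ln(2) = 0.693

So an outcome predicted with 0% or 100% probability indeed produces no information, exactly as the quote says.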
Even more importantly, people often say that "rho = rho1 + rho2" decomposition of the density matrix means that there are "two worlds", one that is described by "rho1" AND one that is described by "rho2". (Similarly for "psi", but for "rho", the comments are more clear.) But this is a complete misunderstanding of what the density matrix and addition means. The density matrix is an operator encoding probabilities - its eigenvalues are predicted probabilities. And we're just adding probabilities, not potatoes.
And he continues on the theme. No, I can't agree. Superpositions of states do not have to collapse. There can be, and there are, two worlds. Or three worlds: one of the 'subject' (observer), one of the 'object' (cat) and one of the probabilities (quantum world). Exactly WHAT is manifesting itself, and why? Note that the quantum world does not usually interact with the cat, only through windows. And measuring one character momentarily doesn't mean the whole entangled cat vanishes, nor that the probabilities vanish, only that the subject gains massless information. This is just rubbish. Look: "It is not possible to know the value of all the properties of the system at the same time; those properties that are not known with precision must be described by probabilities. (Heisenberg's uncertainty principle)" The Universe is digital.

The "Lumo interpretation" clearly:
you describe the system by a density matrix evolving according to the right equation and it is always legitimate to imagine that the world collapsed to an eigenstate of the density matrix and the probabilities of different eigenstates are given by the corresponding eigenvalues of the density matrix. Incidentally, this also works for pure states for which "rho = psi.psi". In that case, "psi" is the only eigenstate of "rho" with the eigenvalue of "1", so you may collapse into "psi" with 100% probability which leaves "rho" completely unchanged. ;-) The only illegitimate thing to imagine is that the world has collapsed into a state which is not an eigenstate of "rho".
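Put in numbers (my NumPy sketch, reusing the cat amplitudes; nothing here is from Lubos's text except the rule itself):

    import numpy as np

    # A pure state: rho = psi . psi^dagger has psi as its only eigenstate
    # with nonzero eigenvalue, and that eigenvalue is 1.
    psi = np.array([0.6, 0.8])
    rho = np.outer(psi, psi.conj())
    print(np.linalg.eigvalsh(rho))            # [0. 1.]

    # A decohered (diagonal) rho: the eigenvalues are the outcome probabilities.
    rho_decohered = np.diag([0.36, 0.64])
    print(np.linalg.eigvalsh(rho_decohered))  # [0.36 0.64]

Collapsing "into psi" with probability 100% leaves the pure rho unchanged, which is the point of the wink in the quote.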
Uncertainty principle.
It means that generic pairs of properties - pairs of projection operators describing various Yes/No properties of a system - can't have well-defined values at the same moment. Pairs are usually oscillating, for instance solitons. Only (magnetic) monopoles can be 'singular'. What about the possible Higgs boson? Can it be a monopole?
Especially John von Neumann, before he began to say silly things, liked to emphasize that the nonzero commutators and the Heisenberg uncertainty principle is the actual main difference between classical physics and quantum physics. If the commutators were zero, the evolution of the density matrix would be equivalent to the evolution of the classical probabilistic distribution on the phase space.
What is this! A measurement without observer? A dead Universe?
Because the projection operators P and Q corresponding to two Yes/No questions about a physical system typically don't commute with one another...
A non-commutative digital Universe? Made of pairs of Yes/No properties? But there must be some correlation? Some entanglement; both cannot be Yes. And two questions can also be dependent on each other. OK, typically...
This main principle is the main "underlying reason" why the GHZM experiment or Hardy's experiment produce results that are totally incompatible with the classical or pre-classical reasoning. The classical reasoning is wrong, the quantum reasoning is right - and the nonzero commutators in the real world are the main reason why the classical reasoning can't agree with the observations.
Complementarity principle.
Bohr's favorite principle shows that systems exhibit both particle-like and wave-like properties, but the more clearly you can observe the one, the more obscure the other has to be, and vice versa.
This shows no collapse! Can the wave-like patterns be described as invisible fields? I remember Lubos disliked the work of the Zeilinger group in Vienna. They have clearly shown the duality, and that both can coexist in an oscillation: expanding-compressing space/time.

The measurement devices follow the rules of classical physics. This is another source of misunderstandings, says Lubos. The apparatus behaves as a classical object. So in particular, you may assume that these classical objects - especially your brain, but you don't have to go up to your brain - won't ever evolve into unnatural superpositions of macroscopically distinct states. This was a point in which the Copenhagen interpretation was incomplete. They didn't quite understand decoherence, he claims.
Decoherence shows that the states of macroscopic (or otherwise classical-like) objects whose probabilities are well-defined [does he mean squared, on-shell?] are exactly those that we could identify with the "classical states" - they're eigenstates of the density matrix. The corresponding eigenvalues - diagonal entries of the density matrix in the right basis - are the predicted probabilities.
How can probabilities be well-defined? By the non-existent collapse? No!
However, they were saying that there is such a boundary at which the quantum subtleties may be forgotten for certain purposes and they were damn right. There is such a (fuzzy) boundary and we may calculate it with the decoherence calculus today. The loss of the information about the relative phase of the probability amplitudes between several basis vectors is the only new "thing" that occurs near the boundary.

The boundary is always fuzzy because decoherence is self-evidently a continuous process in which the off-diagonal elements quickly (at a certain time scale) but gradually drop to zero. (in comments)
This is only speculation. On the contrary, the fuzziness problem of boundaries points to QM effects (for instance quantum tunneling, sum over paths).
when the history contains a large collection of decohered outcomes, all observers may ultimately reconstruct the same macroscopic past. It's just the "intermediate state of affairs" before the individual events that is "fuzzy" - in Feynman's approach, one has to sum over all histories of how to get from A to B, so everything that happens between A and B is just "intermediate results" that don't have any objective properties.
But what if the intermediate states get inputs from outside? Information loss or gain? Look at photosynthesis! There is much happening in the intermediate states. How can they be locked into a black box? Topology/memory? Entropy?
A commenter, Illuminated: Nature is inherently quantum. But human language and thoughts have a classical structure despite the fact that they emerge from an underlying quantum substrate. So, to describe nature using language, equations and thoughts forces us to come up with a classical description of something which is inherently quantum. From a classical point of view, we have to treat the quantum nature as some black box oracle which takes in classical inputs and spits out classical outputs. If we insist upon coming up with a classical model of something which is inherently quantum, that is like constructing mechanical models of the electromagnetic aether. This leaves us with extremely inefficient classical auxiliary constructs like wave functions or density matrices which are exponentially long vectors, or a functional integral over some extremely large functional space of paths. Asking about the actual value of a particular wave function component, or the nature of the functional space of all paths is like asking about the cogs and gears making up the electromagnetic aether.
If we wish to calculate, until the day practical quantum computer simulations become available, we have to resort to classical equations. That's why we have to resort to wave functions, density matrices and path integrals. But otherwise, we should treat nature as a black box oracle. What do you do with black boxes? You only consider the inputs and outputs.

What the "subjective" nature of wave function means that wave function is really a gadget that collects all the information about the system that may be known to an observer, and that may be used to predict the future. The only subjective thing about it is that at different points of space and time, i.e. at locations of different observers, different things about the past are known and used as the input - initial wave function - while the rest is to be calculated probabilistically.
To predict is to compute probable outcomes and project them into the future. Usually we use information in clumps, analogies. So there is much implicit information in our predictions. Note the personal light-like CD a la TGD.
In a comment: When you say that the state is "psi", it means that "psi" is what you get by time evolution from all the most recent past data that you could have measured or otherwise learned.

Another observer CD who studies you may, however, use a different state vector such as 0.6 psi1 + 0.8 psi2 where psi1 is the state in which you hold a particle apparently in state psi1 and say that it's in state psi1, and another state where you hold a particle in state psi2 and prepare to do the calculation based on this assumption that it's in state psi2.

The observer CD may only decide that psi1 is correct and psi2 is wrong *after* he actually measures the combined system of you and the particle. Before he actually gets this knowledge, he will be using a different function than you. The wave function that a given observer is using depends on what he already knows about the outcomes that have taken place. This is the sense in which it's subjective.


The brain gets its information in a state reduction process, or decoherence. The 40 Hz state reveals the large amount of work done, suddenly 'collapsed' into 7 Hz when the information is gained/selected. But much information (as probabilities) is rejected at the same time. Could this also be compared to a 'sum over paths'? Thanks for this. But this 'observer' CD is from TGD.

For more about the links between decoherence and the second law, see e.g. http://arxiv.org/abs/gr-qc/9402006

In comments: Even a geometrically small brain composed out of a small number of atoms will be able to "perceive" almost perfectly decohered outcomes - one may increase the amount of decoherence by extending the period of time in which the decoherence is taking place. However, again, if the system is small and the time is short so that the decoherence is very far from perfect, the objects shouldn't be imagined to observe sharp outcomes of any measurements. Such objects have to be treated as coherent quantum objects that don't perceive anything and that always have a chance to "forget" what they already learned.

In some sense, it's analogous to virtual particles. Virtual particles may momentarily violate inequalities for the energy, or tunnel through a barrier, and so on, as long as dE.dt doesn't exceed hbar, the mandatory uncertainty limit. Similarly, objects that are too small and don't perfectly decohere may "perceive" some outcome, but their perception may be undone in the future. This is the quantum realm. Sharp perceptions of outcomes that hold and that can't be revised only occur in the classical limit where the decoherence is perfect and completed. In the real world, it's never quite perfect, but because rho(A1,A2) goes like exp(-exp(t)), one gets very quickly extremely close to the ideal limit.
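To see how fast an off-diagonal element behaving like exp(-exp(t)) dies off (my toy numbers, in arbitrary time units; the double exponential is the only input taken from the quote):

    import math

    for t in [0.0, 1.0, 2.0, 3.0]:
        print(t, math.exp(-math.exp(t)))
    # 0.0 -> 3.7e-01, 1.0 -> 6.6e-02, 2.0 -> 6.2e-04, 3.0 -> 1.9e-09

The decay is continuous and never exactly zero, but within a few units of time the element is negligible - the fuzzy but very quick boundary described above.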


I think of the dreaming state versus the waking state. Lubos: All the imperfectly decohered perceptions could be subject to revisions - some of them would be illusions. - if you were just a generic microscopic system that has no chance to have macroscopic perceptions, the observer would have to describe you properly in the QM framework using a wave function that never collapses. This is the superorganisms, the bacteria as whole cells, not genomes. Or cells in the body, mitochondria? What determines the quantum:classical measurements? Not size alone! Also complexity, networks, expansion-compression. Rigidity:flexibility? Einstein talked of inertia.
Lea Luke: Concerning the abstract "non-reality" of the complex vector spaces (Hilbert spaces?) themselves: I read that they are real in the sense that they carry momentum and energy even when no particles are there. Lubos answers: it has a relation to classical logic - rules for adding and multiplying probabilities in classical physics. Whenever the questions whose probability we calculate by QM are correctly formulated, the probabilities follow all the classical laws. P(A or B) is equal to P(A)+P(B)-P(A and B), P(U and V) = P(U) P(V) if U,V are independent, and so on, and so on. But the new thing is that in quantum mechanics, probabilities themselves are "composite", they're of the form

P(A) = sum (psi_c psi*_d A_cd)

where A_cd is the matrix representing the projection operator associated with property A. Note that the probability P is schematically "psi squared" - it's made of something more fundamental, something that is complex and may interfere.
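The schematic formula can be checked directly (my sketch; the projector onto the "alive" basis state is my choice of example, not from the quote):

    import numpy as np

    psi = np.array([0.6, 0.8])     # amplitudes psi_c for |alive>, |dead>
    A = np.array([[1.0, 0.0],
                  [0.0, 0.0]])     # projection operator A_cd onto |alive>

    # P(A) = sum over c,d of psi_c psi*_d A_cd - "psi squared" against the projector
    P = float(np.real(psi @ A @ psi.conj()))
    print(P)                       # 0.36

For a projector this is just the Born rule again: the amplitudes are the fundamental, interfering objects, and the probability is quadratic in them.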

The other, often emphasized, difference is that in quantum mechanics, you may *only* talk about the classical probabilities of properties that have already decohered from their alternatives. There is no probability of "intermediate" properties of a system prior to the measurement that would obey the classical laws - there are only wave functions, the more fundamental complex probability amplitudes, and these amplitudes are being added - before they're squared at the moment of the measurement.


Oh, my dear God. Would this psi squared be BETTER than p-adics and primes? These guys REFUSE to discuss with Matti??? I cannot believe my eyes. WHAT are they doing?

Correspondence principle
Both Bohr and Heisenberg also emphasized the correspondence principle, e.g. that the quantum equations reduce to the classical ones in the appropriate limit. If you study e.g. the evolution of the expectation value of "x" and "p", they will evolve according to the classical equations. It's the Ehrenfest theorem.

As discussed in the previous point, it is not enough to show that the world of classical perceptions will occur in the limit as well: we also need to know that the right "basis" will become relevant in the classical limit. Nevertheless, with the extra additions that had been demonstrated in recent decades, I mean decoherence, we know that it is true that the classical perception and choice of states does occur in the appropriate classical limit.
The holographic principle? The quantum world manifesting on a surface (2-D, that is, spacetime sheets)? What are the rules? This seems like nonsense to me. Sorry, Lubos.
This boundary doesn't mean that quantum mechanical laws ever break down. They never break down. What's true is that for large enough systems, one may use - and should use - the approximate classical scheme (the word "approximate" sounds too scary but in reality, these approximations are super excellent for all practical and even most of the impractical purposes) of asking questions because one may show that it becomes legitimate.
Comment by pbfred: Einstein forgot about the disadvantages of a principle theory and developed General Relativity which, if not a principle theory, builds on the principles of special relativity, which cannot be visualized. This set the stage for Quantum Mechanics, which itself is nonvisualizable and based on a good many principles. The biggest one is the Uncertainty Principle. Another is the principle of Wave-Particle Duality. The Pauli Exclusion Principle is just an algorithm that for some reason seems to work. Pauli admitted in 1945 that he could find no reason as to why it worked. Of course, this applies to the Uncertainty Principle too, for it just happens to work - but why does it work?

In the comments from Lubos, a comment about Einstein worshiping:
Einstein has done great things. When I was a high school student, I read his Mein Weltbild obsessively and about 5 times. He was my God. And I did fundamentalistically believe that realism had to be correct up to some point, too. Then I tried to explain the simple patterns in atomic physics etc. and I had to be stealing pieces of quantum mechanics to explain it in my "would-be" realist framework. Of course, at the end, I had to rediscover/steal all of quantum mechanics.

When I just went through all the evidence and the critical properties and experiments, it became manifest that despite the "intuitive naturalness" of Einstein's expectations, he was just wrong. Physics is not about uncritical worshiping of cults. Of course, as time continued, it became clear to me that Einstein was fundamentally wrong about many things. It's not just quantum mechanics. It was his expectation that the strong force and the weak force (nuclear forces) would ultimately go away and become non-fundamental, and so on.

Of course when I was 15, but no longer at 16, I shared those beliefs. But these beliefs are just wrong. Despite his unusual and revolutionary contributions in a certain era, Einstein just wasn't able to stay in the "actual" top of physics after the mid 1920s, despite the fact that he of course remained the most popular and appreciated guy with the media. But science has made lots of progress after 1916 when GR was being completed. In my view, the quantum revolution was more profound than the relativistic one. It wasn't done by one person only - like Einstein's relativity - but the total importance was bigger. One must separate personal appraisals and emotions from science. It's just true that Einstein, despite his giant contributions and likability, wasn't at the top of the physics research since the 1920s. The idea that revolutionaries lose some flexibility to understand important new developments is much more universal. Think about Dirac vs renormalization, Feynman vs strings, and many many other examples.

If someone can't accept such things given the evidence, he's doing cult worshiping and not science. Science doesn't have its infallible prophets. It just doesn't work in this way. So I find the very focus in your comment on exciting emotions about Einstein something that shouldn't be done if we are supposed to talk about science. It's emotional harassment, not science. Einstein was wrong whether it sounds popular or not.


Brian: "
reading this blog is sort of like a physics PhD's version of right-wing talk radio." Ye, why should we? Lubos will never change.

Monday, 23 May 2011

All in the mind.

BBC has a new series of programs called All in the Mind, exploring the limits and potential of the mind. Link to the ‘Medical Matters’ page where you can get the podcast.

2 Nov 10, Battlefield Mental Health, antidepressants and morality and Community Treatment Orders. Download.
The first program: war and ethics. The ethics of forcing psychiatric patients to take treatment in the community, and whether antidepressants change our moral decision-making; military mental health, ‘community treatment orders’ or CTOs; the antidepressant research, which has found that SSRIs alter how people respond to moral dilemmas like the ‘trolley problem’, is also an intriguing look into the neurochemistry of ethical choice.
9 Nov 10, Brain injury in young offenders, an unusual case of memory loss, plus how to stop worrying. Download.
16 Nov 10, Forensic science and psychology; Testosterone and City traders; Suicide terrorism. Download.
23 Nov 10, how the senses affect memory, food and design. Download.
30 Nov 10, John O'Donoghue, Digital memories and the Work Capability Assessment. Download.
7 Dec 10, Mapping the brain, online psychological support for cancer. Download.
14 Dec 10, how social networking is changing adoption. Download.
21 Dec 2010, The 1:4 statistic; Falklands Diary & Ancestors and Intelligence. Download.

and on TV.
19 Apr 11: Anorexia and autism
26 Apr 11: Psychopathic brain
03 May 11: Personal space & claustrophobia
10 May 11: Ostracism

and

17.5.2011: Metin Basoglu, Ted Kaptchuk, and Irene Tracey.

Thousands of people across the world who survive devastating earthquakes are living with the trauma of the disaster compounded by the experiences of aftershocks. Claudia Hammond talks to Metin Basoglu, a psychiatrist who has developed a method of mass psychological treatment for survivors of disasters like these, based on his research with over 10,000 people who lived through the Turkish earthquake of 1999. Could a single session of this kind of therapy really make a difference?

How strong is the placebo effect? Can sugar pills make you feel better even when you know that's exactly what they are? Claudia talks to Ted Kaptchuk from Harvard University about his findings that for people with Irritable Bowel Syndrome, knowingly taking a placebo pill twice a day improved their symptoms. But is it the placebo or the ritual that surrounds taking it?

Professor Irene Tracey, pain researcher at Oxford University, says the power of placebo is all about manipulating expectation of the person taking it. She believes this research still required deception. Her research on pain and the brain had led her to suggest that rather than using placebo, changing people's expectations of active drugs could be medically beneficial.

Also - why, to read someone else's emotions, your own face needs to minutely mimic their facial expressions. When the brain gets feedback from the face, it gets information on what that person is feeling. And why Botox, which paralyses those muscles, reduces the ability to understand emotion.

Sunday, 22 May 2011

The castle is crashing down?

Nima's 'new' talk (from 2.5.) with many new ideas. "You can identify everything in 3-D space", and he says infinities are important.
Spacetime, Quantum Mechanics and Scattering Amplitudes

Sun 22.5: Matti has written a new chapter, Motives and Infinite Primes.
Infinite primes, integers, and rationals form a hierarchy completely analogous to a hierarchy of second quantization for a super-symmetric arithmetic quantum field theory.


May 16, 2011: The Kinematic Algebra From the Self-Dual Sector.
Ricardo Monteiro and Donal O'Connell, arXiv:1105.2565

KITP Informal Discussion. So I must look at what it could be.

We identify a diffeomorphism Lie algebra in the self-dual sector of Yang-Mills theory, and show that it determines the kinematic numerators of tree-level MHV amplitudes in the full theory. These amplitudes can be computed off-shell from Feynman diagrams with only cubic vertices, which are dressed with the structure constants of both the Yang-Mills colour algebra and the diffeomorphism algebra. Therefore, the latter algebra is the dual of the colour algebra, in the sense suggested by the work of Bern, Carrasco and Johansson. We further study perturbative gravity, both in the self-dual and in the MHV sectors, finding that the kinematic numerators of the theory are the BCJ squares of the Yang-Mills numerators.

Introduction: A deeper study reveals concrete relationships between gauge and gravity scattering amplitudes. Early in the first heyday of string theory, Kawai, Lewellen and Tye (KLT) derived a relation, valid in any number of dimensions, expressing any closed string tree amplitude in terms of a sum of products of two open string tree amplitudes [1]. Taking the field theory limit of the string amplitudes, these KLT relations imply in particular that any graviton scattering amplitude can be expressed in terms of a sum of products of colour ordered gluon scattering amplitudes.

A more insightful "squaring" relationship between gauge theory and gravity amplitudes was conjectured recently by Bern, Carrasco and Johansson (BCJ) [2], and then proven by Bjerrum-Bohr, Damgaard, and Vanhove [3] and by Stieberger [4]. These authors have shown that scattering amplitudes in gravity can be obtained from Yang-Mills scattering amplitudes, expressed in a suitable form, by a remarkably simple squaring procedure.

[1] H. Kawai, D. C. Lewellen, S. H. H. Tye, Nucl. Phys. B269, 1 (1986).
[2] Z. Bern, J. J. M. Carrasco, H. Johansson, Phys. Rev. D78 (2008) 085011. [arXiv:0805.3993 [hep-ph]].
[3] N. E. J. Bjerrum-Bohr, P. H. Damgaard, P. Vanhove, Phys. Rev. Lett. 103, 161602 (2009). [arXiv:0907.1425 [hep-th]].
[4] S. Stieberger, [arXiv:0907.2211 [hep-th]].
[5] T. Sondergaard, Nucl. Phys. B821, 417-430 (2009). [arXiv:0903.5453 [hep-th]].
[6] N. E. J. Bjerrum-Bohr, P. H. Damgaard, T. Sondergaard and P. Vanhove, JHEP 1006, 003 (2010) [arXiv:1003.2403 [hep-th]].



KITP talks, many of them.

"Cubic Feynman diagrams appear quite naturally in the self-dual sectors." Fits nicely with Wilzcek too. In fact TGD has also these.
"On the gravitational side, we have seen how to compute MHV amplitudes from cubic diagrams in light-cone gauge" they say, up to a mild assumption about the behaviour of the higher terms in the gravitational light-cone Lagrangian. The fact that the BCJ relations hold (at least on-shell) for all graviton amplitudes suggests that there is some cubic theory which can be used to compute these amplitudes."

They don't say quaternion?

They talk of "the self-dual Yang-Mills (SDYM) equations in Minkowski spacetime". Is that ZEO?

It seems the fantastic castle is crashing down? All the many different dimensions were simply illusions?

Why not look at TGD? What is so disgusting about it?

Friday, 20 May 2011

Death and the Loosening of Consciousness.

Toward a Science of Consciousness. Brain • Mind • Reality
May 3-7, 2011 Stockholm, Sweden. The eighteenth annual international, interdisciplinary conference on the fundamental question of how the brain produces conscious experience.
Keynote speaker Sir Roger Penrose, and featured speakers Luc Montagnier and Deepak Chopra and their staff. On the comment panel, Leonard Mlodinow and Stuart Hameroff, etc.
Penrose:
A profound puzzle of quantum mechanics is that the discontinuous and probabilistic procedure adopted for measurement is in blatant contradiction with the continuous and deterministic unitary evolution of the Schrödinger equation. An inanimate measuring device, being made from quantum particles, ought to follow the unitary laws, so many physicists take the view that consciousness is ultimately needed for measurement. I here express the almost opposite view that the unitary law must be violated for massive enough systems, and that it is consciousness itself that depends upon this violation, requiring new physics and exotic biological structures for its manifestation. The issue of what kind of universe history could provide laws fine-tuned enough for consciousness to arise will also be raised.

Interesting speakers:
Johnjoe McFadden, Surrey – The Cemi Field Theory: Gestalt Information and the Meaning of Meaning
Tarja Kallio-Tamminen, Helsinki – Quantum Physics and Eastern Philosophy
Paavo Pylkkanen, Helsinki – Bohmian View of Consciousness and Reality
Luc Montagnier, Nobel Laureate, Paris – DNA, Waves and Water
Giuseppe Vitiello, Salerno – DNA: On the Wave of Coherence
Jack Tuszynski, University of Alberta – Information Processing Within Dendritic Cytoskeleton
Anirban Bandyopadhyay – Direct Experimental Evidence for the Quantum States in Microtubules and Topological Invariance
Germund Hesslow, Lund – The Inner World As Simulated Interaction With The Environment
Stuart Hameroff, UMC Arizona – Meyer-Overton Meets Quantum Physics

There was also the End-of-Life Brain Activity session:
Lakhmir S. Chawla, GWU – Surges of Electroencephalogram Activity at the Time of Death.
Peter Fenwick, London – Death and the Loosening of Consciousness, whom I'll quote here.

There is a growing awareness of the importance of end of life experiences. These comprise transcendental and spiritual features which support the dying through the last days of life, and paranormal phenomena around the time of death which are comforting for the bereaved. Our data base consists of retrospective and prospective studies of a population of carers in hospices and a nursing home in the UK, and a retrospective study of carers in Holland. Added to this are over 1500 accounts from a largely English sample of the general public in response to media discussions. The dying process as described by these people will be discussed. The dying process may start one to two years before death with a premonition about one’s own death. In the weeks before death there may be ‘visits’ by apparitions of dead relatives who indicate that they will soon return to accompany the dying person on their journey through death. As the process continues, some indication may be given by these visitors of the likely time they will return. Next, some people report that they transit between this reality and another reality consisting of love, light and compassion. At the time of death, light surrounding the body and shapes leaving the body are reported. Deathbed coincidences occur, when some kind of contact is made between the dying person and someone at a distance to whom they are emotionally close. This ‘connectedness’ seems to extend both to animals, which become distressed, and even to mechanisms such as clocks which are often reported to stop at the time of death. One hypothesis is that the process of death seems to be related to the stages of loosening of consciousness.
(Abstract 274, PL14: Death and the Loosening of Consciousness – Peter Fenwick, Institute of Psychiatry, Kings College; Southampton Univ., London, United Kingdom)

Free will/Libet and synesthesia were also discussed.
Berit Brogaard / Jason Padgett, The superhuman mind: From synesthesia to savant syndrome / Geometric synesthesia
Heather A. Berlin, Implicit self-esteem in borderline personality and depersonalization disorder
Matti Bergstrom, The statistical dispersion of particles in quantum physics is an error

Healing, embodiment, telepathy and the Eastern view were also discussed.
Language and memory too.

Monday, 16 May 2011

Art and film help to sell living technology and synthetic biology.

Synthetic biology presents one of the most dynamic fields of biotech research. As synthetic biology develops into an applied technology, it is important that scientists, stakeholders and the public communicate in an interactive way. Look at this picture to see how diverse the areas involved are. The Science, Art and Film Festival Bio:Fiction, on 13-14 May 2011 at the Museum of Natural History in Vienna, Austria, offered a new approach to communicating and discussing synthetic biology. It brought together a public audience with scientists, social scientists, filmmakers and artists. The main objective of the festival was to provide information and dialogue about synthetic biology in an attractive and interdisciplinary manner.

They mean to package the concept in a pleasant manner so it can be easily sold to politicians. Do you buy it?

The Science, Art and Film Festival relies on three main pillars:

  1. Film festival: Bio:Fiction will include the world's first synthetic biology film festival. Our call for film submissions in 2010 triggered the submission of 130 short films, of which 45 were pre-selected as being relevant to synthetic biology.
  2. Bio-art exhibition 'Synth-ethics': The Festival marks also the start of the art exhibition 'Synth-ethics' in the Museum, which presents biotech art objects related to synthetic biology.
  3. Science talks and panel discussions: The program is designed to attract scientists, social scientists, students, the media and the general public to allow for an interdisciplinary program and exchange of ideas.
What is synthetic biology?
This is how it is presented:
Synthetic biology is a new area of biological research that combines science and engineering. Synthetic biology encompasses a variety of different approaches, methodologies and disciplines, with the aim of designing and constructing new biological functions and systems not found in nature. Synthetic biology is based on genetic engineering but goes much further. In genetic engineering the goal was to manipulate an organism’s genes, usually by transferring one gene from a donor to a host organism. Synthetic biology, on the other hand, aims at creating whole new biological functions, systems and eventually organisms.

1. Engineering DNA-based biological circuits, including standard biological parts:

Instead of just transferring one gene, a whole system is built into an organism (e.g. an oscillator, an on-off switch, a more complicated multi-step chemical synthesis of a useful biomolecule, a biocomputer).

2. Defining a minimal genome/minimal life (top-down):

Taking a bacterium that already has a very small genome (i.e. number of base pairs) and reducing it even further until the organism cannot survive any longer. That way we can define and understand the smallest possible genome that still sustains life. This minimal life will also form a "chassis" for hosting the biocircuits described above.

3. Constructing synthetic cells or protocells from scratch or bottom-up:

In an attempt to prove Pasteur’s “law of biogenesis” (Omne vivum ex vivo, Latin for “all life [is] from life”) incomplete, scientists are now trying to produce synthetic cellular life forms from simple chemical ingredients.

4. Creating orthogonal biological systems based on a biochemistry not found in nature:

All forms of life on earth use the famous DNA molecule. Now scientists are constructing different molecules with similar functions (e.g. XNA, xeno nucleic acid) to construct living systems that have never existed before, as a way to avoid interference with naturally evolved DNA while doing biotechnology.

5. The chemical synthesis of DNA

So far DNA could only be created by life itself, but now special DNA synthesis machines can actually “print” DNA the way we want it. Scientists can, for example, download the genetic code of a virus (and eventually a bacterium) and construct its DNA with this machine.

By applying the toolbox of engineering disciplines to biology, a whole set of potential applications become possible. Some of the potential benefits of synthetic biology, such as the development of low-cost drugs or the production of chemicals and energy by engineered bacteria are enormous. There are, however, also potential and perceived risks due to deliberate or accidental damage, as well as ethical questions. In order to ensure a vital and successful development of this new scientific field – in addition to describe the potential benefits – it is absolutely necessary to gather information also about the risks and to devise possible biosafety strategies to minimize them.

Adam and Eve and the Origin of Life? Is this art? The dance of Life?

And a whole list of links...

About synthetic biology
Societal aspects of synthetic biology
flesh eating hybrid robots

We should start talking ethics immediately. Is human consciousness really ready for this? A giant jump toward a better world ought to happen, but we are incapable even of solving our big problems, such as climate change, food and water supply, minerals, sustainability, problems with the monetary systems, etc. The main problem behind them all is lack of ethics.


Do we really think we would handle this one better? Are we dreaming? Or must there be one more great extinction?



The main problem is our illusions of the personal body and everything it needs. We laugh at the Egyptian kings who had all the supplies they would ever need with them in their graves. We are not at all better. Our illusionary body needs so much.


Still it is made from the environment that we destroy. So, in fact, we destroy ourselves.

And this new technology is linked to 'The New World Order', Illuminati. Is that a serious threat?


Concerns about violating nature are sometimes connected to the worry that creating synthetic cells would be playing God.
Are we godlike enough to pursue synthetic biology and protocell science? Some say "Yes", some say "No". Either way, the pragmatic worry about playing God is serious and deserves the full attention of all of us, even atheists.