Sunday 21 January 2018

Anisotropy of the Universe

There are several kinematic formalisms that describe the rotational anisotropy of the Universe. That the universe is isotropic, which together with homogeneity comprises the Cosmological Principle, was viewed as the natural extension of the Copernican principle. In truth it also renders Einstein's field equations analytically solvable.

The following article, entitled "The Anisotropic Distribution of High Velocity Galaxies in the Local Group",

https://arxiv.org/abs/1701.06559

highlights potential problems with the Cosmological Principle's assumption of isotropy, a problem for the standard Cold Dark Matter theory based on (symmetric-connection) GR that is also highlighted in CMBR surveys:


caltech link
https://ned.ipac.caltech.edu/level5/Sept05/Gawiser2/Gawiser3.html

A summary of the observational status is presented in the following article "How Isotropic is the Universe?" published on September 22, 2016 in Physical Review Letters,

https://arxiv.org/pdf/1605.07178v2.pdf


With this observational motivation to hand it is interesting to consider, as the authors do in this article
http://www.mdpi.com/1099-4300/14/5/958 , the transition from early- to late-epoch universe: one initially dominated by spinning matter contributions to torsion, then latterly by those non-symmetric space-time connection contributions that come from the orbital angular momentum of the aggregate rotation of the universe as a whole (the superclusters that make up its large-scale structure).

In light of this we should reflect on the most appropriate kinematical objects and formalisms to deploy in describing such asymmetry.

Adding Structures to the Basic Manifold of Space

Einstein-Cartan (EC) theory provides for an additional rotational (orbital and spin) degree of freedom, through a space-time torsional coupling, as a metric but non-symmetric-connection extension of the Einstein-Palatini (separate connection and vierbein variables in the variational principle) formalism of General Relativity.

EC represents a minimal extension of GR within which one can encapsulate any violation of the cosmological isotropy principle through a rotating Universe. Once such bulk rotation on aggregate is plausibly identified, one must find a means of capturing the universe's inherently orientable character. By enlarging the means by which space-time can be connected, and thus its set of invariance-preserving coordinate transformations, spin and orbital angular momentum contributions from matter can be coupled to the asymmetric ("axial") torsion connection components resulting from a minimally extended variational principle.

Orbital angular momentum is a Poincaré (Lorentz cross translational) invariant in the purely kinematical arena of Special Relativity, in which no dynamics is afforded to its inert (literally!) spatial substrate. Angular momentum is the archetypal pseudo-vector, as described by a×b below.
In the matter-encapsulating, more corporeally accepting dynamical arena of General Relativity it only retains a pseudo-tensorial (density) status. Space-time's malleability in the presence of matter (adhering to either Bose or, more pertinently here, Fermi statistics) invites orientability as well as the curvedness embodied in its holonomy. How do we build on the abstract notion of a rubber-sheet manifold?

We look to add a minimal set of structures to our abstract descriptive space:

https://www.mathphysicsbook.com/mathematics/topological-spaces/generalizing-surfaces/summary/

By adding metric structure, lengths can be measured and compared at different points in the space, provided the space does not possess any strange shearing structures. Kinematically, the most natural objects to represent spinning matter's frame-dragging effects are two-spinor-valued differential (multivector) forms. Just as half-integer-spin matter source terms are most naturally modelled as invariants under the Special Linear Group (the double cover of the Lorentz group), the tangent space of space-time also affords a natural two-spinor character.




Multivector Calculus


Roughly, while vector objects parallel-transported once around locally flat infinitesimal parallelogram circuits return with the same orientation, spinorial objects require two circuits in order to return to an identical configuration.
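
A minimal sketch of this, borrowed from ordinary quantum mechanics rather than anything specific to parallel transport: rotating a spin-1/2 state through one full turn flips its sign, and only a double turn restores it,

\[ U(\theta) = e^{-i\theta\,\sigma_z/2}, \qquad U(2\pi)\,\psi = -\psi, \qquad U(4\pi)\,\psi = +\psi, \]

whereas an ordinary vector already returns to itself after a single 2π rotation.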

In the language of multivectors, an orientated parallelepiped volume is equivalent to a pseudo-tensor density with a leading determinant-of-the-space-time-metric term.

https://en.wikipedia.org/wiki/Multivector
We can see that a "2-blade", a 2-form built as the antisymmetric product of one-forms, defines an orientation. These oriented planes can be treated as one of the variables in a variational principle. That is, instead of the metric/vierbein and connection form, we use these bivectors as fundamental spatial elements.
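
As a schematic sketch in that spirit (the first-order Einstein-Cartan action in form language, not necessarily the exact formulation used in the linked article): the oriented 2-planes \(\Sigma^{IJ} = e^I \wedge e^J\) built from the vierbein one-forms can be taken as the basic area elements, alongside an independent (possibly torsionful) connection \(\omega\), with gravitational action

\[ S[e,\omega] = \frac{1}{2\kappa}\int \epsilon_{IJKL}\, e^I \wedge e^J \wedge F^{KL}(\omega), \]

where \(F^{KL}(\omega)\) is the curvature two-form of \(\omega\); varying \(e\) and \(\omega\) independently returns Einstein's equations plus an algebraic relation tying torsion to spin.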

Invoking liberally the language of quantum theory, in such a schema the irreducible parts of fermionic matter and bosonic space-time both share a common spinor structure.

Pseudo-vectors such as the archetypal angular momentum vector are most naturally described by the (Hodge) dual of a multivector form. This is because in the multivector formalism the dual object is intrinsic to the space. That is, no higher-dimensional reference space need be invoked to encapsulate the otherwise extrinsically defined, orientation-inducing act of identifying an axis of rotation.
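
Concretely, in the standard three-dimensional illustration (nothing specific to the articles above): angular momentum is really the 2-form (bivector) built from position and momentum, and the familiar axial vector is its Hodge dual, needing no embedding space,

\[ L_{ij} = x_i p_j - x_j p_i, \qquad L^k = \tfrac{1}{2}\,\epsilon^{kij} L_{ij} = (\mathbf{r}\times\mathbf{p})^k . \]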

Duality in this sense can operate on the locally flat tangent-space indices (of the vierbein) as well as on the generalised coordinate space.


Using the former, the Hodge "star" facilitates the construction of both self-dual curvature (of the connection) objects and self-dual two-forms of orientated 2-spaces (of parallelograms, say). Lagrangians that deploy these most natural chiral 2-forms are de facto oriented formulations of gravity.

http://www.mdpi.com/1099-4300/14/5/958

Compressibility of Form as a Measure of Depth (of Meaning)

Following Murray Gell-Mann we can trace the development of Maxwell's equations from tensorial differential equations to a more streamlined differential-form expression.

Ultimately we can write Maxwell's equations in terms of differential forms: d*F = *J and dF = 0. That is it!
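
For reference (flat space, units with c = ε₀ = 1), unpacking the two form equations in tensor components recovers the familiar set:

\[ dF = 0 \;\Leftrightarrow\; \partial_{[\mu}F_{\nu\rho]} = 0 \quad (\nabla\cdot\mathbf{B}=0,\;\; \nabla\times\mathbf{E} = -\partial_t\mathbf{B}), \]
\[ d{\star}F = {\star}J \;\Leftrightarrow\; \partial_\mu F^{\mu\nu} = J^\nu \quad (\nabla\cdot\mathbf{E}=\rho,\;\; \nabla\times\mathbf{B} = \mathbf{J} + \partial_t\mathbf{E}). \]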

Gell-Mann follows the Computation crowd in referring to the high Information Content of this deceptively simple but dense form. Such simple algebraic form belies the sophistication of the associated differential geometry.




Might it be though that just because we are not schooled earlier in Lie Algebras or multi-vector calculus this merely appears more sophisticated?

An historical artefact in the evolution of our ideas perhaps?

If the greats upon whose shoulders we stand had revealed these formulations earlier (before the precedents) would we ascribe such high informational content to such dense constructions?

Reductio ad absurdum!



What does philosophy have to say of the Reductionist program?


Indeed what are the guiding principles of any Reductionist program applied to basic physics?

En route to revealing "Theories of Everything" there seems to be a tension between the search for a minimal set of objects that explain our reality and the deployment of a minimal set of guiding principles. Reductio ad absurdum is "a form of argument which attempts either to disprove a statement by showing it inevitably leads to a ridiculous, absurd, or impractical conclusion". The program applied to fundamental physics is by this dint self explanatory? A discussion of the Naturalness program and its partial success as a guiding principle can be found here:


https://arxiv.org/pdf/1710.07663.pdf

Our aims are more modest but probably no less coherent here. Quite generally I am just trying to keep a foothold where I can on the theoretical physics front, which may be just as penetrable to those not knee-deep in their own cognitive dissonance, in hock as they are to the research grant providers.

For the most part the maths has gotten away from me, so if you are similarly jaded by the arcane mathematics that abounds let's try to keep an eye on the bigger picture, that is, the metaphysical questions that might still be in play. Let's try to take this one easy.

You can do things with numbers (abstracted from collections of objects): add, subtract, etc., that is, perform operations on them with operators. By obeying the rules of those operations you can move up and down the number line. You can create a shorthand for repeated addition of those numbers (call that the × rule). You can create a shorthand for repeated multiplication: call that the exponentiation operation, ^. Mixing up these operations allows you to play with these objects efficiently. These are the laws of distribution you ignore at high school, taking them as obvious.
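
Spelled out, these are just the familiar shorthand and mixing rules, for example:

\[ 4\times 3 = 3+3+3+3 = 12, \qquad 3^4 = 3\times 3\times 3\times 3 = 81, \qquad a\times(b+c) = a\times b + a\times c . \]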

The Algebra of Geometry


We can start to drop these assumptions and look at the implications that this freedom affords us. Illustrated beside are the non-commuting transformation operations of reflection through an axis of symmetry followed by a clockwise rotation. The order of such distinct operations (transformations) matters.


http://www.math.brown.edu/~banchoff/Beyond3d/chapter9/section02.html
Now if you extend the field of number (objects) to a plane (rather than a line) of numbers (complex ones), some rules of algebra (the Fundamental Theorem of Algebra) become simpler in this broader space of number-objects and operations. So by giving greater freedom (an increased number of allowed operations) to a greater number of objects we end up with greater simplicity. Below we see how to factorise a cubic polynomial using a high-school long-division technique. The divisor of our cubic is a quadratic and gives rise to a linear "quotient" function with zero remainder.
http://mcuer.blogspot.com.es/2007/10/precalculus-25-fundamental-theorem-of.html


We can further factorise the quadratic even though it does not cut the Real axis. It possesses, rather, complex roots. We see then that any polynomial has the same number of roots (counted with multiplicity) as its degree. By widening the field of numbers over which we do our analysis, more insight and greater simplicity is achieved.
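
A worked example in the same spirit (not necessarily the one shown on the linked page): dividing the cubic x^3 - 1 by the quadratic x^2 + x + 1 gives a linear quotient with zero remainder, and the quadratic itself then splits over the complex numbers,

\[ x^3 - 1 = (x^2 + x + 1)(x - 1), \qquad x^2 + x + 1 = \Big(x - \tfrac{-1 + i\sqrt{3}}{2}\Big)\Big(x - \tfrac{-1 - i\sqrt{3}}{2}\Big), \]

so the cubic has exactly three roots, one real and two complex, as its degree demands.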


To summarise: the algebra of real numbers (object-solutions) can be acted on by the ×, /, ^ (the latter exponentiation) operators and is thus deemed closed under these operations, which obey the laws of distributivity, commutativity and associativity. The object set can be extended to complex numbers (the operations generalised to complex exponents), and we note that in this larger formalism complex-valued polynomials have much simpler root rules than real-valued ones.

Guiding Principles


Back then to the guiding principles. In mathematics, the greater the number of axioms that one is required to adhere to, the more restricted the set of solutions that the resulting self-consistent theory delivers. At some point indeed there is a unique solution, as only one "reality" satisfies all the axioms. Conversely, as you peel away the axioms, slackening the constraints on the system, more possibilities plausibly satisfy the loosened laws of your world.

A search for the most restricted set of axioms (guiding principles) allowing for multiple realities, or the search for a unique reality built from a tightly bound lattice of constricting premises? More objects, more allowed operations, a proliferation of laws? I guess I am merely asserting that even high-end physics has an axiomatic front to it, even beyond the hard-core axiomatisers: solve a path integral based on an Action (principle) constructed with holomorphic complex functions, with or without boundary conditions.

lessWrong

Perhaps the stationary action principle, glorious as it is, is too restrictive? In it we assume sets of underlying fields and restrict ourselves to functionals yielding at most second-order equations of motion so that causality is respected. Not much else, but for nature leaning heavily towards the path that minimises the difference between the kinetic energy, T, and potential energy, V, of the system. A very narrow set of guiding principles that delivers "on-shell" equations, with plenty of freedom to find them experimentally.
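
In its textbook mechanical form (a sketch, before any field-theoretic generalisation), that narrow principle reads

\[ S[q] = \int_{t_1}^{t_2} L\,dt, \qquad L = T - V, \qquad \delta S = 0 \;\Rightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0, \]

with the second-order Euler-Lagrange equations as the "on-shell" output.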

In science, the reductionist program seeks to reduce the world to a set of indiscernibles that may or may not obey lots of laws or be founded on a limited set of guiding principles (axioms). The question is: by which rule of simplicity are we guided? By the number of principles, the number of resulting laws, or the number of free parameters linking a limited set of objects?

To the side we delineate the stuff of the universe into its Bosonic and Fermionic fundamental constituents. As collectives they each have distinct characteristic probability distribution profiles.


http://eenadupratibha.net/Pratibha/Engineering-Colleges/Engineering-Jobs/engg.phys_content4.html

Following no lesser a mind than John Wheeler, let's call out this indistinguishability between fundamental particles for what it is: they are one and the same particle! https://thecosmogasmicperson.wordpress.com/2017/08/20/its-all-the-same-electron/ Perhaps not.

Let's now play with a bit of logic and pretend to axiomatise our thought processes, to see if this gives us any insight into the fundamentals of indivisibles.

Logical Reduction


Consider now the syllogism:
  1. The (matter) Clumping of elementary indivisibles results in the loss of ("binding") mass-energy. 
  2. Matter (stuff) distinguishes itself by the clumping of its constituent particles. 
  3. Distinguishability (of stuff) results from the emission of mass-energy (in terms of radiation). 
The logic of the deduction seems sound enough, but a top-down argument to deduce the implications of a top-down program is perhaps tautological? Further, the interpolation is premised on a bottom-up induction: a posit that all elementary particles are indistinguishable (by definition?) and thus tied (being left invariant by some spatial or inner-space group transformation) to each other by symmetry principles. The inductive insight (of the scientist) is the generalising symmetry principle of the indivisibles that enables us to extrapolate to all indivisibles. So was it inevitable that the inductive argument would lead us to a process of reductive deduction? Or the other way around?


https://www.quora.com/

From this we can reflect on the following. That decay (from random emission) creates difference seems natural. That accretion through random emission creates difference, as follows from logical deduction, seems a little less natural. Perhaps the answer lies in the predicate indivisibles? That fermions cannot reside in the same state renders them less indivisible than a boson that happily forms (Bose-Einstein) condensates. That we distinguish fermions from bosons on the basis of their mixing statistics suggests the act of condensing (accreting, clumping) through boson exchange renders the primary predicate vulnerable to argumentation.


Are analogies a help or a hindrance in the push to a final theory?

When is analogy profound and worthy of extrapolation and when is it merely useful for illustration?

The universe possesses substance: not just in being occupied by stuff, it is substantive even when not apparently filled with what we would term things. That is, it is permeated by all-pervading fields, where the permeation is not through space but is part of the noun of space itself. Such mixing of nouns and verbs must lead us to some quantum entanglement! In what follows we will look at a couple of analogies for our universe, from trying to get to grips with the "stuffness" of its space through to the effects of its cooling through expansion. Analogies only take us so far.

Curved space-time could be modelled as a sunbathed metal plate (whose inner portion is partially in shadow) in which rulers across its middle will be shorter than those on the outside: plainly an illustrative analogy. Is the use of geo-mechanical optics and Stationary Action Principles extended to all the fields of physics just as debatable?

From Aether to Percolating Vacuum to Super-fluid Substrate


Einstein's Cosmological constant (which some read as dark energy or the percolating vacuum) is to be thought of as the "substrate" (the old aether) of space-time, a term physicists have appropriated from the biologists. It is all a bit "like", but that is the nature of the explanation game that is theoretical physics.


So, up until Michelson-Morley, we used to think like Newton that stuff just sat in a receptacle of space. Now, like Leibniz, stuff resides (for want of a better word) within the stuff of space. I cannot get past the infinite regression of space having a stuffness.

To gravitational waves. That space-time can fluctuate (in time) seems too tautological to be a helpful description. Are these the limits of language, or is this an essential ambiguity to be unpicked?

Related perhaps is the following ambiguity. Through the Equivalence Principle, gravitationally induced acceleration can be equated to inertial acceleration (over limited space-time intervals). So just as in marketing, where "Hoover" is a generic brand, being both a noun (a vacuum cleaner) and a verb (to vacuum), the gravitational field is both a temporally extended event and the mediating stuff of exchange.
Strong and weak versions of the Equivalence Principle do not distinguish between the terms of this mixed branding exercise.
The other three gauge (exchange-force) fields cannot be afforded such branding ambiguity unless we entertain motion in an internal space as real motion. Similarly, are we not affording the vacuum a "generic" status: like a Hoover, it is both the arena and the event of percolating creation and annihilation of fields of stuff.

By my reading, in Penrose's Conformal Cyclic Cosmology space-time is just space in the era before inertial matter appears (the particle era, ~10^-3 s?), as you require rest mass for time to tick. See E = hf = mc^2. Only when you have more than just light-speed bosons knocking about defining causality, that is, when stuff-laden fermions come into being, does the clock really start to tock.
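
A rough reading of that equation chain (nothing beyond the formulas already quoted): equating the two energy expressions gives a rest-mass-set tick rate,

\[ hf = mc^2 \;\Rightarrow\; f = \frac{mc^2}{h}, \]

so a quantum of zero rest mass carries no intrinsic rest-frame frequency, i.e. no clock of its own.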

If the space is always stuff, its energy content needs to be all kinetic in these early moments (moment-less moments?) for the universe to start from the nascent zero Weyl (that is massless driven rather than Ricci-mass driven) Curvature that Penrose advocates: https://www.youtube.com/watch?v=FBfuAVBdcW0

Holograms and Conformal Cyclic Cosmology

In Penrose's Conformal Cyclic Cosmology the universe oscillates between conformal "intervals", in which the energy of the universe is dominantly kinetic, and "ratio" eras (like now), when it is dominated by inertia (rest mass). The universe will cease once again to tick with the final Hawking pop of the last black-hole-radiated photon. Without matter you have no measure of time, as E = hf and E = mc^2 tell us.

Conformal means that ratios, but not absolute lengths, are measurable. Null light rays (of photons) demarcate the causal structure of space-time. They trace out timelessness.
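
A minimal way to state this: under a conformal rescaling of the metric, only scales change, while the null cones, and hence the causal structure traced out by light rays, are untouched,

\[ g_{\mu\nu} \;\to\; \tilde g_{\mu\nu} = \Omega^2(x)\, g_{\mu\nu}, \qquad g_{\mu\nu}k^\mu k^\nu = 0 \;\Leftrightarrow\; \tilde g_{\mu\nu}k^\mu k^\nu = 0 . \]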



The No hair theorem of Black Holes stylistically illustrated right:

That is, black holes, no matter how complex the processes that made them, can be quantified by three moments: mass and angular momentum (the monopole and dipole of inertia-gravitation) and charge (the monopole of electromagnetism). The various routes to some convergent explanations are illustrated below, being a mind map of Lee Smolin's book.

The conformal eras are the gravito-electromagnetic (massless) dominated eras. Perhaps these should be termed inertia-electromagnetic. That is, given the Einstein equivalence principle, we have perhaps rather spuriously emphasised gravitation (the field) over inertia?



Finally then, if the universe is just freezing out its constituent forces by aligning itself along certain symmetries then when is the fifth force of modified gravity due to reveal itself?



In the above we see details of the standard epochs at which the decouplings occur.
We have concrete data, through the CMBR, of photon decoupling at 3000 K (assuming the spectral theory of atoms, the redshift expansion of the universe, etc.).
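
The ratio of those two temperatures fixes the redshift of the last-scattering surface; as a back-of-the-envelope check:

\[ 1 + z_{\rm dec} \simeq \frac{T_{\rm dec}}{T_{\rm now}} \approx \frac{3000\,\mathrm{K}}{2.7\,\mathrm{K}} \approx 1100 . \]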

Another picture is that to the right:

We infer the Cosmic Neutrino Background decoupling temperature (given the neutrino's notoriously weak interaction). Is there scope in the earliest epochs (as these are energy regimes inaccessible to our accelerators) for further, as yet unknown, decouplings?

What is the post-photon era that we live in now? Perhaps we could term it the gravitationally accrete, fuse and excrete epoch, or more catchily the "inertial matter epoch", when matter accumulates, albeit transiently?

Coarse Graining Ambiguities


The Big, the Fast and the Complex are all to arise from a genie theory that, amongst (a few) other things, will be able to fully describe black hole dynamics. Observe below the Genie theory of Physics, a yarn spun from string theory, if not from a knot from knot theory or a loop from loop quantum gravity, a twistor from...? Anyway, the kinematical notions of symmetry-group theoretical principles, the field, and the concept of the canonical ensemble have their representations/realisations in the three pillars of physics (in orange), arenas which require knitting together.



Scale ambiguity is expressed in the mathematician's infinitesimal: in mechanics, as the limitless dissection of a time lapse into a dense set of instantaneous moments whose stitched-together locations are scribed as trajectory inputs for a calculus of variations.

The Three Pillars of Physics


The three pillars of fundamental physics (Thermodynamics, Particle Physics and General Relativity) at root have kinematical descriptions that hinge on the ambiguity of scale. How do their descriptive frameworks depend on the scale at which the theories are applied? Apply with precision and view their high-energy microstructure, or take a coarse-grained view from afar. Respectively, the three pillar descriptions hinge on the three Cs: the Canonical Ensemble, Coupling-Constant scaling and Conformal invariance.



Canonical Ensembles as Localised Islands of Tranquility

Consider only the ensemble. Thermodynamics may be better termed thermostatics, its elements being the ensemble of system (static) states to which can be assigned macroscopic "state" variables such as temperature and pressure. Such state measures are to be representative of localised aggregates (ensembles) that within ("intra") themselves are in equilibrium but between ("inter") themselves are in flux. These ensembles are localised islands of tranquility.

That is, the ensembles are individually homogeneous enough in character as to be fairly wholly describable by such coarse-grained indicators of aggregate behaviour. In any reductionist program these variables, such as temperature (say), are given a mechanism: the microscopic kinetic energy of the constituent molecules or "atoms".
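
For the simplest case of an ideal monatomic gas that mechanism is just the equipartition relation (quoted here only as the textbook instance):

\[ \tfrac{3}{2}\, k_B T = \big\langle \tfrac{1}{2} m v^2 \big\rangle . \]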

Here lies the contradiction in this picture of stitched-together, intrinsically defined static ensembles. The static-state quality assigned at a chosen coarse-grained view implies no equilibrium exists between neighbouring islands. The island ensembles are dynamically transitioning according to the zeroth law, in mutual contact as they are at different temperatures. So take another observer's less coarse-grained view of that same system of localised states deemed to be in static equilibrium and they will see, rather, states still in flux, still dynamically moving towards localised equilibrium.

Is this merely a semantic distortion of a truer (fundamental) statistical mechanical reality or is it a convenient but artificial picture?

Is it any wonder that limitless continuity jars with discrete state ensembles and bundles of stuff and energy? The movement away from the reductionist's preoccupation with the nouns of stuff to the verbs of process within (complex) dynamical systems would seem a step in the right direction.


That a hologram (merely a projected reality), the new fallout from black hole thermodynamics and AdS, is a "thing" and not an "event", though, suggests it may not be an end game.


Saturday 20 January 2018

On the Model dependency of astrophysical Observational facts

A recent discussion has led me to try to draw out the distinction between observational "fact" and the (potential house-of-cards) models that best explain that observation.


Attribution of Truth from Fact often neglects the role of the modelling frameworks that underpin any kind of sophisticated observation in physics. Are we identifying facts in those regimes where we are conveniently shining a light?

The most idle of interpretations of a phenomenon may rest on a cascade of intertwined theoretical frameworks and modelling assumptions. The archetype is the interpretative computational edifice used to extract the signal from the noise in 2017 gravitational wave detection experiments.
At a relatively more prosaic level, consider the four simple observations that have relevance in the theory of Big Bang expansion:
  1. spectral lines (associated with model-postulated elements on a star's surface) are observed not to be in the same positions as those of the same elements in an earth-bound lab;
  2. quasars and distant galaxies, determined by the cosmological distance ladder (many distinct and partially overlapping inferential measuring tools: parallax, laser ranging, Hertzsprung-Russell diagram spectrum matching, Cepheids) to be more distant, appear to have cosmological redshifts proportionally larger than those of nearer visible objects (see the relation sketched after this list);
  3. the sky is not full of starlight - Olbers' paradox;
  4. the CMBR is at roughly 2.7 K.
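
The proportionality alluded to in point 2 is, at low redshift, the Hubble relation (quoted here in its simplest textbook form):

\[ 1 + z = \frac{\lambda_{\rm obs}}{\lambda_{\rm emit}}, \qquad cz \simeq H_0\, d \quad \text{for } z \ll 1 . \]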

Roughly these are best explained by a collection of intertwined theories:
  • Quantum mechanical (QM) description of atomic line spectra, whereby quanta of light (through E = hc/λ) moving up (timelessly?!) through a gravitational potential have longer associated wavelengths, thus marking off longer time intervals. This is explained by General Relativity (GR), through both its "clock" redshift
(http://slideplayer.com/slide/7101373/ ) as well as a distinct (and significantly larger) cosmological expansion redshift. The theory leans on theories of stable star formation and their subsequent generational history.

  • there was a super-luminal spatial expansion (probably early-time inflation) which has causally disconnected our vantage point (as all others are from theirs) from light sources beyond a horizon. This effectively red-shifted all the photons' energy away. GR with a cosmological constant, assuming a perfect-fluid energy-momentum source and an inflationary period, delivers a cosmological redshift with increasingly accelerated expansion.
  • thermal radiation from the first decoupling of fermions and one of their mediating bosons - the photon. QM and Statistical Mechanics fit the data to a Planck black-body distribution curve, with the microwaves observed now.
http://www.astro.ucla.edu/

Any nascent theory in which observed "redshifts" are accounted for wholly (or partly) by the "clock redshift" rather than the cosmological one has to tick all the other boxes and a lot more.
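
For orientation, the two redshift mechanisms being contrasted are, in their simplest textbook forms (a Schwarzschild exterior for the "clock" case, an FRW scale factor for the cosmological case):

\[ 1 + z_{\rm grav} = \Big(1 - \frac{2GM}{rc^2}\Big)^{-1/2}, \qquad 1 + z_{\rm cosmo} = \frac{a(t_{\rm obs})}{a(t_{\rm emit})} . \]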


You could envisage the earliest galaxies/quasars being so dense as to generate huge clock redshifts so cosmological expansion is not needed, but you need a new model of stable star and galaxy formation to back up such unaccounted for densities. How this helps in explaining the other effects I cannot see.


Consider the natural unit system and Dirac's large number hypothesis.

thespectrumofriemannium

You could also envisage either Planck's constant h or Newton's constant G being a scalar field, so that such constants are number fields and not constant at all.

We have the simplest scalar-tensor extension of Einstein's theory: the Brans-Dicke(-Dirac) theory.
But you have got to hope that such tweaks here don't poke holes over there. There is evidence that the higher multipole moments that these scalar-tensor theories predict in gravitational waves were not present in the 2017 observations.
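
For concreteness, the Brans-Dicke action, in which Newton's constant is traded for a dynamical scalar φ ~ 1/G (the standard Jordan-frame form; the "Dirac" variant further ties its variation to the large-number hypothesis):

\[ S = \frac{1}{16\pi}\int d^4x\,\sqrt{-g}\,\Big(\phi R - \frac{\omega}{\phi}\, g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi\Big) + S_{\rm matter} . \]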

A more fruitful route to uncovering the theory that the folks who are actually prepared to do the hard work might just have missed (our amateur-scientist motive, after all) is to work through the house-of-cards assumptions on which astrophysics is built. After all, we cannot all scrutinise everything ourselves, so we trust the peer process to deliver the solid frameworks off which we will scaffold our next ideas. The cosmic distance ladder, with its use of spectroscopy, statistical methods and intricate assumptions about the stability of stars within a framework of classical magnetohydrodynamics, is a candidate house of cards asking to fold.

The virial theorem can be unwittingly misapplied to systems that are in fact unbound: systems possessing multitudes of unaccounted-for internal degrees of freedom through spin, and/or an unattributed dark matter component, or whose age, and thus stability, has been miscalculated.
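
The theorem in question, for a self-gravitating system that really is bound and relaxed, is the statement

\[ 2\langle T\rangle + \langle U\rangle = 0, \]

so applying it to a system that is not actually virialised (unbound, still relaxing, or harbouring hidden internal degrees of freedom) silently biases the inferred mass.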


Belief, at one level, is a mental representation of an attitude positively oriented towards the likelihood of something being true. The Greeks give us two related concepts: pistis, referring to "trust" and "confidence", and doxa, to "opinion" and "acceptance", from which the English word "orthodoxy" is derived. There are a great many orthodoxies in astrophysics up for debate.

What breed of curious scientist are you?

What attracts the curious budding scientist to a social Discussion forum?

Feynman is quoted as saying that if you cannot explain something (simply) you don't understand it. Ergo even the deepest ideas are accessible to those who dare to tread. But Goldstein (of the boson, presumably!), a colleague of his, recalls:


"Feynman was a truly great teacher. He prided himself on being able to devise ways to explain even the most profound ideas to beginning students. Once, I said to him:


"Dick, explain to me, so that I can understand it, why spin one-half particles obey Fermi-Dirac statistics." Sizing up his audience perfectly, Feynman said, "I'll prepare a freshman lecture on it." But he came back a few days later to say, "I couldn't do it. I couldn't reduce it to the freshman level. That means we don't really understand it."



Categorising is part of the scientific method and in the following I will use it to try to delineate our various approaches (abilities) to gathering scientific knowledge.



Some of us further down the food chain have to consume on good faith the work of others. We all fail at some level to accommodate the absorption of new scientific knowledge into our worldview. It does beg the question of what our motivations are when we try to comprehend explanations of fundamental (basic) physics.


What category of "quaternary" level scientist are you?


If as a first point of assertion I define the upper three categories of scientists as:

  1. Primary- you are producing the original work yourself and jousting with peers in hallowed university halls.
  2. Secondary- you are a reader of peer reviewed articles extracting what you can to feed and develop a line of reasoning that you have been following and are pretty near to joining the paper chase.
  3. Tertiary- you are a reader of reviews of peer-reviewed articles that do a pretty good job (you think) of encapsulating the nature of the source research article. Perhaps you enjoy comparing their review to how you perceive the paper (in its less-than-appealing, targeted-to-a-different-audience form). Perhaps you resist assimilation; perhaps you accommodate and adjust your interpretation of a concept or two.
What then of the quaternary type that inhabit these forums in their many forms?
Some better names and reclassifications may be in order:
  1. "defenders of the faith", a believer in the authority of the above peer-peer-peer reviewed process, whose duty it is to propound the articles of present understanding; perhaps overly self-assured in performing "science's" work;
  2. "the hopeful sceptic", a respecter of the authorities but who finds it intriguing that recent empirical leaps are revealing the limitations of sciences' " internal representations"; perhaps also a little sceptical of all the faux circumspection that is being practiced;
  3. "a logical negativist", one who finds it less interesting to measure up their ideas to any observational (or cutting edge interpretation of, at least ) reality;
  4. "a social deconstructivist", a realiser that science' scope has gotten too big for hugely biased (compromised) humans to piece together a satisfactory explanatory story. That "observation" is no longer merely through some physical apparatus but is processed and parsed through potentially partial, interpretative, contingent modelling (!), the cultivators of which are neither (the best) theorisers or experimentalists. That no man, as Milton Freidmann argues, really even noone knows now how to put together a pencil.
If you haven't connected with these definitions, ask yourself this: "Would you find a debate on some scientific explanation with someone of orthodox faith less rewarding than one with Edward Witten?"


Indeed, if you can dare to accommodate that which initially jars, you might accept that the whole education system, from middle school through high school and on to college, is the slow revealing of fuller explanations of reality. That tiering of revelation keeps going beyond the school years. We all take on knowledge, process it and parse it at different speeds, with some barely getting off the starting line.

We are compromised human beings who are nevertheless capable of being more self-aware and reflective of our present limitations.


Relatedly, I guess, as these discussion forums could be taken as allegories for how scientific interaction might work, I think I am becoming more of a "follower" of "social constructivism", according to which individuals are more inclined to dupe themselves with their entrenched priors, "assimilating" rather than "accommodating" new views.


According to Piaget's constructivism, individuals assimilate new information, splicing it into their existing framework unchanged, when those new experiences are aligned with their internal representations of the world. That's OK, but this can amount to a failure to correct a faulty understanding by (choosing to) misunderstand input from others. More problematic is when an individual's new data contradicts their internal representations: rather than be discombobulated by it, they change those (experiential) perceptions just to fit their internal representations.

Accommodation should be our aim, being rather the process of reframing one's mental representation of the external world to fit those new experiences. It is this failure to comprehend that leads to new learning: the moment of appreciating that we held a false expectation of how the world operates, and that that expectation has actually been violated.