ABSTRACT: Suppose that physically uncorrelated but morphically resonant systems can implicitly make predictions about each other (I suggest some details), and that these predictions can influence reality. Such systems will then need to take each other's predictions into account (and there will also be a recursive aspect), which will result in a lot of noise in their influence, but with collective emergent-agent-system dynamical patterns popping up now and then. Qualitatively this seems to me like the kind of thing that might explain the various sorts of oddities we see in psi data...
The existence of psi phenomena is in my opinion relatively well demonstrated (see references here and many others); but we lack any decent theoretical explanation for how and why these phenomena occur. I have made some speculations in this direction (see this page and prior entries in this blog), but these prior ideas – as well as being quite loosely defined – don’t deliver any conceptually clear route to explaining the numerous peculiarities associated with psi data (such as the weakness of psi phenomena, the decline effect, psi-missing, and so forth).
In this post I will
climb even further out on the limb of my previous speculative psi-theory ideas,
and present some new sketchy conjectures about how psi might emerge from quantum-type
dynamics, if one modified quantum theory in appropriate ways. These ideas are aimed at giving a route to
explaining why psi has the peculiar statistical properties it does. Psi’s
various statistical oddities are taken as a hint as to what kind of mechanism
might exist at the underlying level.
For sure I do not give a completely detailed theory here --
just a sketch, which could be filled out in many different ways. But given the paucity of promising
theoretical ideas regarding the underpinnings of psi, even interesting new
sketches of theories should be considered of value, I think.
Commonalities Between Psi and Finance
Last week, while walking through the Beijing airport, I was
reflecting on the nature of psi results over time and some of their
peculiarities, e.g.:
- The data is almost random – but it’s persistently not-quite-random
- Extremely non-normally distributed, with a lot of mediocre results and then extreme results now and then
- A tendency for patterns to be very distinct for a while, and then largely disappear
- Dependency of patterns on multiple factors that are difficult to pin down, so that when a pattern disappears or diminishes or amplifies, it’s hard to tell what happened
These peculiarities reminded me strongly of financial time series. First there is the basic observation that, while the
“efficient market hypothesis” would dictate that stock or futures prices should
be purely random (so that nobody can make money via trading in the long run),
in fact they are NOT QUITE random – they are just close to random, but with
various inefficiencies that do make them predictable. But specific predictive patterns tend to come
and go, and to get subtler and more sophisticated over time.
On top of its generally peculiar
barely-and-only-complexly-predictable nature, financial time series data also
has various famously odd statistical properties, which are referred to as
“stylized facts” in the finance literature.
(The charming term “stylized facts” was introduced by Nicholas Kaldor
for the reason that “facts as recorded by statisticians, are always subject to
numerous snags and qualifications, and for that reason are incapable of being
summarized” … so he felt that theorists “should be free to start off with a
stylised view of the facts – i.e. concentrate on broad tendencies, ignoring
individual detail”). Among these are the following (see this page; see also some others here):
1. Absence of
autocorrelations: (linear) autocorrelations of asset returns are often
insignificant, except for very small intraday time scales (~20 minutes) for
which microstructure effects come into play.
2. Heavy tails: the
(unconditional) distribution of returns seems to display a power-law or
Pareto-like tail, with a tail index which is finite, higher than two and less
than five for most data sets studied.
3. Gain/loss
asymmetry: one observes large drawdowns in stock prices and stock index values
but not equally large upward movements.
4. Aggregational
Gaussianity: as one increases the time scale over which returns are calculated,
their distribution looks more and more like a normal distribution. In
particular, the shape of the distribution is not the same at different time
scales.
5. Intermittency:
returns display, at any time scale, a high degree of variability. This is
quantified by the presence of irregular bursts in time series of a wide variety
of volatility estimators.
6. Volatility
clustering: different measures of volatility display a positive autocorrelation
over several days, which quantifies the fact that high-volatility events tend
to cluster in time.
7. Conditional heavy
tails: even after correcting returns for volatility clustering (e.g. via
GARCH-type models), the residual time series still exhibit heavy tails.
However, the tails are less heavy than in the unconditional distribution of
returns.
8. Slow decay of
autocorrelation in absolute returns: the autocorrelation function of absolute
returns decays slowly as a function of the time lag, roughly as a power law
with an exponent β ∈ [0.2, 0.4]. This is sometimes interpreted as a sign of
long-range dependence.
9. Leverage effect:
most measures of volatility of an asset are negatively correlated with the
returns of that asset.
10. Volume/volatility
correlation: trading volume is correlated with all measures of volatility.
11. Asymmetry in time
scales: coarse-grained measures of volatility predict fine-scale volatility
better than the other way round.
12. Contagion:
transmission of crises from one market to another
13. Regimes:
Existence of long periods (e.g. months in daily time series) that appear as
“trending” or “volatile” regimes
How can we formally explain these various peculiarities of
financial time series? One quite
promising approach seems to be “agents models” – models of financial markets as
comprised of various predictive agents, each of which is making predictions
using its own variant of bounded rationality, on its own time horizon, and
using its own risk-return profile.
Agents models formalize the idea that financial time series
largely consist of the result of a somewhat diverse bunch of agents trying to
predict one another’s predictions of future prices. This sort of agent system tends to produce
price time series that display the kinds of “stylized properties” generally
seen in real financial time series. The
routes by which the stylized properties emerge from agent systems have been
understood partially via formal analysis, and partially via computer
simulations.
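To make the flavor of such agents models concrete, here is a minimal, uncalibrated sketch – a toy mix of trend-following “chartists” and mean-reverting “fundamentalists” with heterogeneous horizons, where price moves with net demand. Every parameter value here is invented for illustration; this is the general flavor of model, not any specific one from the literature.

```python
import random
from statistics import pstdev

def simulate_market(n_agents=50, steps=2000, fundamental=100.0, seed=7):
    """Toy agents-model market: chartists extrapolate the trend over their
    own lookback horizon, fundamentalists pull toward a fixed value, and
    price moves multiplicatively with net demand. Illustrative only."""
    rng = random.Random(seed)
    # each agent: (lookback horizon, is_chartist?)
    agents = [(rng.randint(1, 20), rng.random() < 0.5) for _ in range(n_agents)]
    prices = [fundamental]
    for _ in range(steps):
        demand = 0.0
        for horizon, is_chartist in agents:
            if is_chartist and len(prices) > horizon:
                trend = prices[-1] - prices[-1 - horizon]
                demand += 1.0 if trend > 0 else -1.0      # follow the trend
            else:
                demand += 1.0 if prices[-1] < fundamental else -1.0
            demand += rng.gauss(0, 0.3)                   # bounded rationality
        prices.append(prices[-1] * (1 + 0.0005 * demand))
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return prices, returns
```

Even a crude setup like this tends to produce return series that are roughly random yet not quite, with hints of the stylized facts emerging from the interaction of agents operating on different horizons.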
Thinking laterally and analogically, this gives rise to the
question: Could the time series produced by psi results somehow be the
consequence of a population of agents trying to predict one another’s
predictions?
Certainly, the quirks characteristic of psi results are not
the same as the ones characteristic of financial time series – though there are
some overlaps. Actually I’m not aware
of any detailed study of the statistical quirks characterizing psi time series
broadly speaking – unsurprisingly a lot more attention has gone into the
statistics of financial time series (as, to understate tremendously, financial
data analysis is a bit better funded than psi data analysis)…. My impression is that in psi we also have fat-tailed
distributions, and regimes, and volatility clustering, and asymmetry of time
scales (for example) – but this would need to be validated via careful
analysis.
An Agents-Based Version of the Precedence Principle
The idea that PREDICTION might have a fundamental role in
psi brings to mind two connections.
First of all, many folks have argued that precognition is
the basic psi phenomenon and that all the others ensue from it. Indeed, every common psi phenomenon except
for macro-PK (mental movement of big objects) seems to be explicable in terms
of precognition, if one does enough mental gymnastics.
And secondly, there’s
Smolin’s Precedence Principle, an idea in theoretical physics that bears
significant relevance to Sheldrake’s notion of morphic resonance (as I’ve noted
here and in previous entries in this blog). While a somewhat imprecise notion, morphic
resonance has a clear conceptual connection with psi phenomena.
The key idea of morphic resonance is that, once a pattern
has occurred somewhere in the universe, it is more likely to occur somewhere
else in the universe. In an earlier entry in this blog, I have noted a direction for making this a more precise concept – the basic
observation being that, if morphic resonance is true, then the variance in the
distribution of patterns across the universe should be lower than one would
expect from independence assumptions.
That is, the distribution of pattern frequencies should be peaked
(pointy in the center) relative to a normal distribution. Unlike the original formulation of morphic
resonance, this formulation does not imply a direction of causation.
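One way to make that variance observation concrete: given counts of a pattern’s occurrences across several supposedly independent regions (hypothetical data shapes – no real psi dataset is assumed here), compare the observed variance to the binomial variance expected under independence. A ratio below 1 (underdispersion) would be the morphic-resonance signature suggested above.

```python
from statistics import pvariance

def underdispersion_score(counts, trials):
    """Ratio of observed variance of pattern counts across regions to the
    binomial variance expected under independence. Scores below 1 suggest
    the peaked (underdispersed) distribution described in the text."""
    p = sum(counts) / (len(counts) * trials)   # pooled occurrence rate
    expected_var = trials * p * (1 - p)        # binomial variance under independence
    return pvariance(counts) / expected_var
```

For example, identical counts in every region score 0 (maximal underdispersion), while wildly uneven counts score well above 1.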
Smolin’s Precedence Principle states that, when something
has happened frequently in the past, it is more likely to occur again in the
future – as a matter of foundational physical “law.”
Smolin shows that, when this principle is applied to things
that have occurred very frequently in the past, then in the right mathematical
setting it provides a novel derivation of Schrödinger’s Equation. When the principle is applied to things that
have occurred sometimes but not yet that often, then it becomes a bit subtler;
and Smolin has suggested that perhaps future occurrences will be chosen based on
the predictions of compact computational models of past data.
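A toy reading of the frequent-occurrence regime is frequency-proportional sampling, with a smoothing prior crudely standing in for the sparse-data regime where a model must extrapolate. The prior and its form here are my illustrative assumptions, not Smolin’s actual math.

```python
import random
from collections import Counter

def precedence_sample(history, outcomes, rng, prior=1.0):
    """One toy reading of the Precedence Principle: the chance of an outcome
    recurring grows with how often it has already occurred. `prior` smooths
    the sparse-data regime where prediction-by-model would take over."""
    counts = Counter(history)
    weights = [counts[o] + prior for o in outcomes]
    return rng.choices(outcomes, weights=weights, k=1)[0]
```

With a long history dominated by one outcome, that outcome is sampled almost always; with an empty history, sampling falls back to the uniform prior.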
(Let me pause for a terminology/math note. I will refer to “probability” in the
following, in some places where it might be clearer to refer to “amplitude”. But I’ll keep using the word “probability”
for simplicity, especially since I like the Youssef formulation of quantum mechanics where one replaces amplitudes with complex-number-valued
probabilities. So when I mention
probabilities below, just remember these probabilities may be real or complex
numbers.)
So now we come to the punchline of this post: It seems
interesting to explore a version of the
Precedence Principle in which the probability
of some type of event occurring in the future, is determined by various
agents’ predictions of the event-type’s probability – where the various agents
are operating based on bounded rationality (due to having bounded spatial,
temporal and energetic resources), and making predictions on various
time-scales, and perhaps even using different criteria for measuring
prediction success.
Baked into this idea is that the different predictive agents
would need to take each other’s predictions into account, in making their
predictions. So we would have the
complex chaotic recursiveness that characterizes financial markets (plus some
additional complex recursiveness of a different kind, that I’ll get to below).
The rough analogy with financial markets would suggest that
this sort of framework might give rise to time-series of event-probabilities,
that are conceptually analogous to financial time-series – in terms of being
almost but not quite random, and in terms of having certain persistent
statistical peculiarities (“stylized facts”).
But who would the predictive agents be, in this
framework? The most obvious answer is
that, given a system S, any other system
S1 that is to any degree correlated(*) with S can be viewed as being in a
position to predict S (and consequently to influence S).
(*) I’ll explain how I
want to interpret “correlated” here a little later (hint: morphic resonance).
Here is one place I’m going to get a little bit
creative. According to quantum theory,
such a system S1 will have stochastic dynamics, i.e. it will evolve according
to a series of “random” choices, operating within constraints implicit in S1’s
constitution. One idea I want to
suggest here is: The apparently random choices
within the dynamics of a system S1 may often be BIASED in a way that implicitly
reflects an effort of S1 to predict states of another system S (with which S1
is correlated).
How might this bias manifest itself? One possibility would be that S1’s dynamics implicitly maximize some
quantity – but they maximize this quantity only if a certain prediction regarding
S’s future state comes true.
What quantity might fit into this slot? Two ideas that I came up with are:
- Entropy production
- Pattern creation
There is a Maximum Entropy Production Principle (MEPP),
which comes out of classical thermodynamics, but appears to also apply in some
form in quantum thermodynamics. This
says, roughly, that a system faced with multiple possible routes to change,
will often choose the route that produces entropy at the maximum rate.
I have also articulated a Maximum Pattern Creation Principle (MaxPat). This says that an intelligent
system in a natural environment will often choose the route that creates
pattern at the maximum rate. MaxPat is
a fairly new formulation and I suspect it may hold more broadly than the
current argument suggests; but as with MEPP, the precise contours and extent of
the principle are not entirely clear.
There is definitely a connection between MEPP and MaxPat,
which remains to be fully understood.
We know that:
- Creating pattern tends to involve creating entropy, in practice … making a sculpture generates heat and piles of waste, etc.
- The entropy of a system is (in a sense) the average algorithmic information of the trajectories it contains
One may want to view maximum pattern creation as the way intelligent systems in natural environments carry out maximum entropy production. On the other hand, one may want to view maximum entropy production as a crude average view of what happens when a lot of little, slightly-intelligent agents do their best at maximum pattern creation. There are deep issues here in need of sorting.
However, my main point in this post is somewhat independent of what
quantity is being maximized. The point
is that if the random choices in S1’s dynamics are made so that some key
quantity is maximized only if S’s evolution unfolds in a certain way – then we
can say that S1 is “implicitly predicting” S.
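A minimal sketch of “implicit prediction” in this sense: S1’s nominally random choice among its possible moves is softmax-biased toward the moves whose payoff (entropy production, pattern creation, whatever) would be high conditional on a particular future state of S. The payoff function, the softmax form, and the inverse temperature `beta` are all illustrative assumptions of mine, not derived from any physical theory.

```python
import math
import random

def biased_choice(moves, payoff, predicted_state, rng, beta=1.0):
    """S1's 'random' move, softly biased toward moves that would maximize
    `payoff` IF its prediction about S's future state comes true. The bias
    in the observed choice statistics IS the implicit prediction."""
    weights = [math.exp(beta * payoff(m, predicted_state)) for m in moves]
    return rng.choices(moves, weights=weights, k=1)[0]
```

An observer tallying S1’s choices could, in principle, read off which future state of S the bias is conditioned on, without S1 ever representing the prediction explicitly.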
Implicit prediction doesn’t require psi or anything
spooky. So long as S and S1 are correlated in the
ordinary physics sense, it can happen as a result of “physics as usual.”
However, things get really interesting if one counts S1 and
S as “correlated” even when they are:
- Uncorrelated or only very, very slightly correlated according to ordinary physics, BUT are --
- Connected via having common patterns in their structures or dynamics
THEN one has spooky implicit prediction … one has a
potential underpinning of psi. Note that
the second condition is basically good old Morphic Resonance, rearing its unstoppable morphic
head once again. But it’s occurring in
a very special place here. The
hypothesis is that when two systems S and S1 have common patterns, they
“morphically resonate” in a specific sense – S1’s dynamics will tend to get
stochastically biased so that they yield critical maxima if S’s dynamics unfold
in particular ways.
Systems unfold over time in ways that implicitly constitute
predictions of other systems with which they are morphically resonating. But many different systems S1, S2, S3… may
morphically resonate with different aspects of a given system S, so we may get
many different predictions, all of which may take each other into account,
resulting in a complex and noisy perturbation of whatever state S would be in
without all that morphic resonance influencing it. This complex, noisy perturbation usually
looks a lot like noise – except when it doesn’t. The statistical properties are going to be
messy and intriguing.
That is: When one has multiple systems S1, S2, … all doing
spooky implicit prediction of the same S – then one has a complex situation
where the various systems all need to second-guess one another’s predictions as
they make their predictions … which is a situation vaguely reminiscent of what
happens in the financial markets, where such inter-agent predictive
interactions give rise to near-randomness marked by numerous weird statistical
quirks.
A note on causality
and temporality.
It’s worth noting that the above formulation does not make
any reference to notions of causality, nor any assumptions about “flow of
time.”
Intuitively, one can interpret this sort of situation to
imply: That the actual probability of S having event E at time t, is influenced
by the predictions of the various systems Si that are correlated with S (the
predictions regarding whether S will have event E at time t). But note that this intuitive interpretation
is leaping from the asymmetry of “entropy or pattern increase” to the notion of
“influence.”
Back to the Precedence Principle!
Tying this back to Smolin, one way to look at the
hypothesized dynamics is that the
Precedence Principle works, not on a universe-wide level, but within each
individual observing system!
(i.e. Precedence
Principle meets Relational Interpretation of QM)
I.e., we are saying that each system S1 that is correlated with S, implicitly adjusts the random
fluctuations in its dynamics based on the expectation that the prior patterns it has observed in S will continue into
the future.
This is just another way of slicing “morphic resonance”,
because the patterns S1 has observed in S in the past are likely to be the
patterns that S1 and S have in common, i.e. the source of the morphic resonance
between S1 and S.
Amplifying Small
Morphic Resonances
If the patterns in S being predicted are the ones that S
shares with S1, then the result of S1’s prediction being correct might be that S
and S1 would, in future, have the same patterns at the same time. That is: One consequence of this sort of
dynamic might be that the maximization-based adjustment of S1’s dynamics (based
on implicit prediction of S) would increase the odds that S and S1 would
continue to have common patterns in future, i.e. would continue to “morphically
resonate” in future.
Thus, what we have here is, in a sense, a theory of how
morphic resonance might work. If we
assume a bit of morphic resonance, and assume impactful implicit prediction
dynamics based on morphic resonance, then as a result we obtain a bit more
morphic resonance.
What Is a System?
In trying to clarify these ideas, we also face the question
of “what is a system”? I.e. one can
partition the mass-energy correlated with S in many different ways.
Do we want to consider every possible subset of this
mass-energy as a system S1, making its own predictions that are then
incorporated in determining S’s dynamics?
Perhaps we do. But intuitively, it
seems to me that more coherent systems S1 should be counted more than essentially
random, disconnected collections of mass-energy quanta.
Perhaps we can measure the coherence of a system S1 using
its “quantum integrated information” (see http://integratedinformationtheory.org for the general idea due to Tononi; and this paper for the quantum version specifically, worked out together with Tegmark )
– a measure of the amount of emergent
information in a quantum system, which amusingly has been proposed by Tononi
and Tegmark as a measure of the degree of consciousness in a system. (I think there is a lot more toconsciousness than that; but I still think the measure is interesting.)
Summary So Far
To summarize, then, I hypothesize that perhaps:
- the state of S is influenced by the predictions of the state of S made by systems S1 correlated with S (including S1 that share common patterns with S, thus are correlated with S via a morphic resonance hypothesis), where S1 makes predictions via a local Precedence Principle guiding its pursuit of maximization of some appropriate quantity (e.g. entropy or pattern creation)
- the degree of influence of S1 on S is proportional to
- the degree of correlation between S1 and S; and also
- the degree of Integrated Information possessed by S1 (or perhaps, the degree of II possessed by S1 and S considered as a collective system?).
Note also that a single system S1 may be involved with
predicting many other systems S. This is
not necessarily contradictory, but of course S1 only has a certain amount of
information to throw around, in terms of the internal “random or spooky”
degrees of freedom in its dynamics. So
if the other systems S being predicted are highly uncorrelated with each other,
S1 is not going to be able to predict a large number of them, unless they are
much simpler than S1.
This is somewhat of a complex conglomeration of ideas, but
there is an obvious theme underlying the components: It’s all information
theory. The information-theoretic
nature of the hypothesis resonates nicely with various previous observations of
connections between entropy and psi.
The Precedence Principle in its simple form provides a
sort-of “mechanistic” underpinning for morphic resonance. But the observing-system-relative
Precedence Principle proposed here, feeding into an ensemble-based dynamic of
event probability prediction, seems potentially capable of something more as
well -- providing an underpinning for the peculiarly elusive and tricky and odd
statistical properties of morphic resonance in our universe.
Let me recap some of the motivations that led to this set of
hypotheses:
- My goal in conceiving these ideas was to propose a model in which the probabilities (or amplitudes) of events in the world are determined via the combination of the predictions of these probabilities by a bunch of different agents, predicting based on different biases due to their different natures and histories, and often predicting on different time scales.
- For this to be a sensible physical theory, the “agents” involved have to be physical systems correlated with the system experiencing the event being predicted; and the predictions have to be implicit in these systems’ dynamics.
- One way to define an implicit prediction by a system, is to say that the system would maximize some physically important quantity if the prediction came true.
- To get a suitable variety of psi-type phenomena out of this, we'd better let "correlated" include correlation via morphic resonance as well as correlation via ordinary physical coupling ... otherwise we'd only get a kind of local precognition out of the implicit predictions...
Recursive, Polyphonic Reality?
But, the above doesn’t quite convey the thickness of the plot
I’m hinting at here…
The clever reader may already have limned the
recursive aspect of what I’m proposing....
It’s funky, eh?
While
- S1 is evolving in a manner implicitly oriented to predict S, and S’s state is influenced by the predictions of various other Si that are correlated with it – at the same time,
- S is potentially evolving in a manner implicitly oriented to predict S1, and S1’s state is influenced by the predictions of various Sj that are correlated with it.
And so we have a system of simultaneous equations, involving
a network of agents (systems) that are all trying to predict each other, and
whose states are all influenced by each other’s predictions of said
states. The result of this crazy
recursive maze of predictions may generally look a LOT like noise – but it
isn’t exactly noise; it has certain biases and peculiarities.
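This recursive setup can be caricatured as follows: each system’s next state is its own noise plus a small pull toward what the other systems “predict” for it – here, naively, a precedence-style average of its recent history. The coupling strength, window size, and noise level are arbitrary illustrative choices.

```python
import random
from statistics import mean

def mutual_prediction_step(states, histories, coupling, rng, noise=1.0):
    """One step of the recursive dynamic: each system Si's new state is its
    own noise PLUS a pull toward the other systems' prediction of it (here,
    a precedence-style mean of Si's recent history). Toy parameters only."""
    new_states = []
    for i, s in enumerate(states):
        prediction = mean(histories[i][-5:]) if histories[i] else 0.0
        pull = coupling * (prediction - s)   # predictions influence reality
        new_states.append(s + pull + rng.gauss(0, noise))
        histories[i].append(new_states[-1])
    return new_states
```

Iterated, each trajectory looks mostly like noise, yet the prediction pull leaves the kind of faint, history-dependent bias the text gestures at.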
Due to this recursive aspect, what I am proposing here is
fundamentally more complex than the situation with financial markets. In the financial markets case, the agents
involved are typically trying to predict the same set of external time series,
and are predicting each other’s predictions only in this common context.
On the other hand, the dynamic I propose here is more like a
weird kind of financial market, in which each trader’s bank balance is treated
as a financial instrument, and the various traders are each concerned with
trying to predict the future states of multiple other traders’ bank
balances. I would suppose, though, that,
if one simulated this odd sort of financial market, one would find a set of
stylized facts somewhat similar to (but probably extending) the ones observed
in typical financial time series.
One could argue that the treasuries of various countries, in
their interactions with each other, display a vaguely similar dynamic to the
one proposed here. Each country is
trying to predict the future values of each other country’s currency, and make
trades on these predictions … and the value of each country’s currency is
influenced heavily by the predictions made by the bankers associated with other
countries’ treasuries…. But of course
this is only a loose analogy.
Obviously, what I’ve proposed here is more an idea for a
theory – or a pointer toward a category of potential theories -- than a
particular, well-fleshed-out theory.
There would be a lot of concrete choices required, to make these ideas
quantitative; and/or to apply them in a careful way to particular psi
phenomena. What I’m aiming to do in
this blog post is merely to outline a TYPE of theory that seems to me likely to
provide a scientific grounding for psi phenomena.
In philosophical terms, one might call this a “polyphonic”
model of reality (I got this term from Bakhtin’s analysis of Dostoevsky, btw). One is viewing reality as neither objective
nor subjective, but as a sort of blend of multiple agents’ subjective
views. The blending takes place on the
level of predictions that are implicit in system dynamics; and one of the core motivating
observations is that blending predictions made by multiple bounded-rationality
agents on multiple time-scales can give rise to time series that are almost but
not quite random and that have non-Gaussian distributions, patterns that come
and go in sudden and surprising ways, and other odd statistical properties that
are broadly reminiscent of what is observed with psi phenomena.
(Well OK then – this
is either ultra hi-fi sci fi; or a penetrating intuitive leap guessing the
outline of a scientific theory that will ultimately be validated N decades from
now after I or someone else does a lot more work. Or else it’s just a lot of
complicated-sounding words in sequence; one more meandering tale told by one
more evolutionarily degenerate ape-chimp-thing, full of sound and fury,
signifying nothing…. But hey, in any
case, thinking about this stuff has been a heck of a lot of fun!)
Regimes and Knots
But wait – there’s more! …
One of the peculiarities (er, stylized facts) of financial
time series is the existence of distinct “regimes.” A market will trend up for a while -- then it will start to get volatile for a
while – then it will trend up or maybe down for a while … etc. These regimes are often fairly clear to
identify in hindsight; but predicting regime shifts in foresight is one of the
hardest problems in the financial prediction arena.
Financial market regimes occur, it seems, as a result of
emergent patterns in collective behavior of multiple predictive agents. In some cases the underlying agent dynamics are
relatively well understood – for example, bubbles arising and bursting. Sornette has parsed out the mathematics and individual and collective psychology of bubble phenomena quite nicely.
The run-up to a “market bubble” is caused by some agents
becoming overoptimistic, then other agents predicting these agents’ ongoing overoptimism,
and so on – until enough agents decide they
have recognized the pattern that the bubble is about to burst – and then things
get chaotic, because some agents are predicting “up” based on the same old
overoptimism predictions, but a lot of others are predicting “bubble is about
to burst.” Then finally a critical mass
is reached and there are more predictions of “bubble is about to burst”, which
causes the bubble to burst. And then
the agents start to predict doom, and start to predict one another predicting
doom and this causing doom, and this causes doom for a while. A root cause here seems to be the presence
of a bunch of agents who are prone to make overconfident predictions based on
too few data points (and we know that real-world financial markets are full of
these).
In the multi-agent model of morphic resonance I have
proposed here, various similar phenomena would seem likely to occur. Multiple systems coupled together and
predicting the same set of systems, will likely get into “collective behavior
patterns” manifesting themselves as “regimes” with characteristic time-series
property signatures.
The nature of a regime is that it’s resistant to small
perturbations – e.g. when a market is trending up, a bit of volatility here or
there tends to get smoothed over quickly by the general overoptimism. This suggests an analogy with “metaphorical knots”
in minds and other complex systems, as I have described in a previous post on this blog. It seems that metaphorical knots could
potentially emerge from the dynamics of multiple inter-predictive agents as
proposed here.
For instance, consider a knot of the form “I am afraid of
love, so I’ll hide my loving feelings; my loving feelings being hidden causes
them to be unknown, which causes me to be afraid of them, since unknown things
are scary.” This could emerge, for
instance, from a combination of:
- Agents that predict that bad things may happen as a result of loving feelings, based partly on evidence that loving feelings are hidden and therefore unknown
- Agents that predict that loving feelings will be hidden in future, based on the fact that they’ve been hidden in the past
If the dynamic is that
predictions affect reality, then
- Agents of Type 1 will cause bad things (at very least bad psychological things) to happen as a result of loving feelings
- Agents of Type 2 will cause loving feelings to remain hidden (thus providing more grist for the motivating belief systems of Agents of Type 1)
and the system including these agents will be “stuck in a
rut.” The rut may start because of
hiding loving feelings in one particular situation… then it may continue
because of the dynamics of the agent system, including the element of
stochastically self-fulfilling predictions.
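The “stuck in a rut” dynamic above can be caricatured deterministically: suppose hiding would relax toward openness on its own, but the Type-2 prediction “what was hidden stays hidden” pulls reality toward the running average of past hiding. One initial hiding event then locks in a persistently high hiding level. All constants here are made up for illustration.

```python
def knot_dynamics(initial_hidden=1.0, steps=50, influence=0.9):
    """Deterministic caricature of a 'knot'. Left alone, hiding relaxes to
    0 (openness); the self-fulfilling forecast 'what was hidden stays
    hidden' pulls reality toward the running average of past hiding.
    `influence` is the (invented) weight of prediction over baseline."""
    history = [initial_hidden]
    for _ in range(steps):
        prediction = sum(history) / len(history)   # precedence-style forecast
        # reality = baseline openness blended with the self-fulfilling forecast
        history.append((1 - influence) * 0.0 + influence * prediction)
    return history
```

With zero prediction influence the rut dissolves immediately; with strong influence, hiding stays high long after the original incident.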
To the extent that there are nonlocal correlations between
physically distant subsystems of the universe (e.g. via morphic resonance as I've considered above), some networks of such subsystems
could then get locked into complex regimes and knots via mutual predictive
activity. In a eurycosmic model these
knots could have portions in the physical universe and portions outside in the
eurycosm. Knots involving the
predictive agents associated with an individual mind, could have
mutual-reinforcement relationships with knots involving the predictive agents
involved with broader aspects of physical reality with which this individual
mind interacts. We have no experience
studying or modeling this sort of complex dynamics; but based on other complex
systems we have studied, it’s not hard to project the sorts of complex hierarchical
and heterarchical networks of strange attractors (or similar phenomena) that
might occur.
Could knots like the above on the collective level be
responsible for the decline effect?
It’s certainly feasible. In
financial markets, once too many agents expect something to happen, others
often start to expect that others will expect the opposite to happen, and then
will predict the opposite will happen – which causes the opposite to
happen. Something similar could happen
regarding predictions associated with multiple unconscious minds correlated
with psi experiments.
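In toy form (all constants invented): suppose the observed effect tracks a collective expectation, and once expectation crosses a threshold, contrarian “predict-the-opposite” agents flip in and drag the effect down – so early experiments show the full effect and later ones a diminished one.

```python
def decline_effect(steps=60, true_effect=0.8, gain=1.2, threshold=0.6):
    """Caricature of the decline effect: early on nobody expects the effect,
    so it shows at full strength; once collective expectation passes a
    threshold, contrarian agents flip in and drag the observed effect
    down. All constants are invented for illustration."""
    expectation, effects = 0.0, []
    for _ in range(steps):
        contrarian = max(0.0, expectation - threshold) * gain
        effects.append(true_effect - 2 * contrarian)
        expectation = 0.8 * expectation + 0.2 * effects[-1]   # word spreads
    return effects
```

The resulting series starts at the full effect size and settles at a lower plateau – a crude but suggestive decline curve.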
These ideas are weird and complicated, but in the end they
all seem to be explorable using conventional scientific methods. They do imply the possibility of
“experimenter effects” in which the predictions that experimenters (implicitly
or explicitly) make regarding experiments, affect those experiments. However, understanding the dynamics of prediction
and psi at a layer below the experimenter effect, may allow us to design
experiments in which this sort of effect can be carefully controlled (much as
the effect of observer on observed in quantum mechanics can be taken into
account in experimental designs, because we understand how it works pretty
well).
Indeed this stuff is complicated -- but humanity has pulled
off a lot of other tremendously complicated and conceptually and technically
difficult things, like making the laptop I’m typing this on, and the Internet
via which I’m conveying it to you….