The Null Hypothesis Needs to Go Away


This piece shall serve to shift the burden of proof from those skeptical of the null hypothesis to those defending it. It is a very stupid hypothesis, touted by a great many hucksters not worth naming. I want you to attack this idea, so I’m going to attempt to persuade you to see things my way and thereby get what I want.

What is the null hypothesis?


The null hypothesis is essentially scientific atheism. In inferential statistics, the null hypothesis is a general statement or default position that there is no relationship between two measured phenomena, or no association among groups (source).

On Existential Knowability

It is important to note that before we can muse about the nature of all that is fundamental, we must first master the true causal web.

Carl. Your apple pie quote sucks, Carl.

You may posit that such a web is fundamentally unknowable, but then you would be back in the trap of atheism, which has already been negatively disproven herein. I will therefore ask you to suspend your disbelief in the existence of a true causal web, because the opposite course of action has produced no observable good effects by any metric. I feel my proof (even if it is technically not the 100% perfect truth and ends up being improved upon someday) is convincing enough to make people think: “ya, the null hypothesis is a steaming pile of crud”, and that it is foolish to assert ad hoc that measurements are unrelated.

Negative Counterproof

Presented and massively promoted since the 1930s, the null hypothesis made it “cool” to presume that nothing was connected to anything and that groups were not associated. This would appear to be quite a naive approach to science, given that connectedness is pretty ubiquitous; separateness is merely apparent. Thus, if one of these hypotheses is to be given primacy, it should be the non-null hypothesis, i.e.: that some degree of sameness exists between two measured quantities. It would also seem rational that what science should attempt to measure is precisely this degree of difference/sameness!


Positive Counterproof

My theory contradicts the null hypothesis because it proves that everything is connected. Note, once again, that my theory makes all predictions and is of minimal cardinality, and thus represents the epitome of science relative to the metrics of complexity and totality. The null hypothesis has no such claim (propaganda-induced argumentum ad populum to the contrary notwithstanding), and thus we can conclude that it certainly does make sense to presume that all measurements are related. In fact, the Measurement Limit is a proof that only three spacelike and one timelike measurement per order of magnitude can be made!

Thus, although the glaring obviousness of the Quantum Mechanical Periodic Table is not manifest upon all orders of magnitude, it must be true, since this order of magnitude also manifests 3+1 spacetime-like dimensions. (It is not worth attempting to prove this, because it is intuitively obvious that we live in a world of 3 apparent space-like and one apparent time-like dimension. In fact, if anyone doubts that only 3+1 dimensions exist, you can probably safely write them off as unenlightened.)

…not that easy to understand but you try summarising the whole Universe in just one diagram.

The Fourfold Action model ({Gravity, Uncertainty, Electricity, Entropy} = {G, U, E, S}) first distinguishes the actions, then relates them by equations denoting their sameness. For Entropy and Gravity, however, no true sameness occurs. Gravity and Entropy both act simultaneously on all orders of magnitude and thus must act in conjunction with Uncertainty and Electricity. G and S are in a state of sameness with U and E in the sense that U and E never occur without G and S also occurring. They are therefore not truly the same, but rather concurrent. That is: it is possible for G and S to act without E and U also acting, but it is impossible for E and U to act without G and S also acting. From this, we conclude that the apparent cause of E and U must be some combination of G and S, which is indeed the way my theory frames it. (We do not exclude the possibility of other explanations, but such distinctions will not be fruitful until a sufficient number of people have assimilated the core teachings as I have presented them. Such nuances are meaningless to the uninitiated.)

Subproof 1: The Causal Matrix

The Causal Matrix is the set of all sets of Universal actions. In order to expound the universal actions, we first remind the reader that the Universe has no creator (by definition, the totality of existence can have no external creator which is unequal to it) and thus is considered to be the primal cause. If you cannot understand this logic, simply accept that the Universe is the primal cause because no cause can be found which precedes it. It is also unique in the sense that it is not a set of actions (because the set of all sets [of actions] is not a set). Thus it follows that the Universe itself is not directly observable as a measurement; it will only be indirectly observable. These observations consist of logical and factual statements which can be used as a substratum upon which to construct all knowable chains of causality (which are not actually chains, but more of a web, which we will denote a matrix, because this will come in handy when we transition to proofs expounded using tensor algebra).

That is: given some derivative action A, there exist actions from the grand canonical set {G, U, E, S} which, given appropriate coefficients {λ, μ, ε, ς}, allow us to define the residual Γ (also an action; the capital Greek letter “gamma”) such that:

A = {λG + μU + εE + ςS} + Γ

Where Γ is also a linear combination of {G, U, E, S} and is immeasurable in A. This shall henceforth be referred to as the principle of knowability and distinction.

We further posit that there exists some reference frame ℜ for which Γ ⊂ (is a space-temporal / entropy-informational subset of) ℜ. This is the principle of reducibility.
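The decomposition above can be illustrated numerically. The sketch below is a toy example under invented assumptions: the four actions are represented as arbitrary basis vectors, the coefficients {λ, μ, ε, ς} are recovered by least squares, and Γ is whatever the fit leaves over; none of these numbers come from the theory itself.

```python
import numpy as np

# Hypothetical illustration only: represent the four canonical actions
# {G, U, E, S} as invented basis vectors and decompose a derivative
# action A into coefficients {lambda, mu, epsilon, sigma} plus residual Gamma.
basis = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.5],   # G
    [0.0, 1.0, 0.0, 0.0, 0.5],   # U
    [0.0, 0.0, 1.0, 0.0, 0.5],   # E
    [0.0, 0.0, 0.0, 1.0, 0.5],   # S
]).T                             # shape (5, 4): five "measurement" components

A = np.array([2.0, 1.0, 0.0, 1.0, 2.5])   # an arbitrary derivative action

# Least-squares fit A ~ basis @ coeffs; Gamma is what the fit misses.
coeffs, *_ = np.linalg.lstsq(basis, A, rcond=None)
Gamma = A - basis @ coeffs

print("coefficients (lambda, mu, epsilon, sigma):", np.round(coeffs, 6))
print("residual Gamma:", np.round(Gamma, 6))
```

By construction, A is exactly the sum of the fitted linear combination and Γ, mirroring the equation above.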

Subproof 2: Grand Canonical Reducibility

We posit that since the Universe is the totality of causality, if we can demonstrate that the Universe is reducible, then it follows that any subset of the Universe and therefore all of its constituent actions are also reducible.

Spatio-Temporal Proof of Universal Reducibility

Since the entropy-informational realm is derived from the space-temporal realm (because space-time requires gravity-entropy, entropy-information requires electricity-uncertainty, and U and E are derivative of G and S, i.e.: appear to be caused by them), the proof of entropy-informational reducibility must also be derived from the space-temporal proof of reducibility.

Thus if the space-temporal domain is reducible, it follows that the entropy-informational domain is also reducible (challenge: prove that the entropy-informational domain is reducible a) in the space-time domain (easy) and b) in the entropy-information domain (more challenging)) because the latter is derivative of the former.


Consider the (observable) Universe: U.

If I am to estimate the size (space-like measurement) of the Universe, I need to know the three greatest interstellar distances. If I (reasonably) presume that these measurements are possible and denote them {M1, M2, M3}, then it follows that if we define the cuboid C1 as having dimensions M1 × M2 × M3, then U ⊂ C1, spatially. Then, if we come down to the next three largest interstellar distances, {M4, M5, M6}, we can define a new cuboid C2 = M4 × M5 × M6 such that C2 ⊂ C1 and U ⊂ C1.


Thus I define my action A to be the measurement of C1 and the residual Γ of A to be the measurement of C2. Since C2 is a proper subset of C1, it follows that there exists some reference frame ℜ (in this case, the Universe) for which Γ (in this case, C2) ⊂ ℜ. Thus A is reducible, by definition.
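A minimal sketch of the nested-cuboid construction, using invented distances (the values carry no physical meaning):

```python
# Sketch with invented "interstellar distances" (arbitrary units) showing
# that the cuboid built from the next three largest distances nests inside
# the first: C2 is a subset of C1 dimension by dimension.
distances = sorted([9.0, 8.5, 8.0, 6.0, 5.5, 5.0], reverse=True)

M1, M2, M3 = distances[:3]    # three greatest distances -> cuboid C1
M4, M5, M6 = distances[3:6]   # next three largest       -> cuboid C2

C1 = (M1, M2, M3)
C2 = (M4, M5, M6)

# A cuboid nests in another (sharing a corner at the origin) when every
# dimension is no larger than the corresponding dimension of the outer one.
nested = all(inner <= outer for inner, outer in zip(C2, C1))
print("C1 =", C1, "C2 =", C2, "C2 within C1:", nested)
```

Because the distances are taken in decreasing order, the nesting holds automatically for any input list.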


Further Commentary on Atheism

I have often referred to atheism as a scourge on humanity. As with Entropy, no matter how perfectly things start out, they always eventually decay and become a hollow shell of their former ideals. When an ideal theocracy becomes degenerate, those with high discernment will lose faith in it, and atheism will surely follow. Holding two contradictory beliefs simultaneously (in this case, faith and doubt in theocracy) is exhausting, and highly sensitive people tend to grow weary of such obligations and eschew the entire dichotomy.

This departure often leads to a generalised loss of faith (in all theocracies), followed by despair. But one should not lose faith in the ideal of theocracy, because it is the fundamental node of natural society. The solution to theological despair is not atheism, it is scientific theism.



Adaptive Algorithms (AI) & Computing

How Conscious AI Works (in theory)

Alan Turing, one of the major minds behind the modern computer, coined the term “Turing Test”. It is a threshold at which answers obtained from a computer simulation become indistinguishable from those produced by a conscious entity. This test has been passed under certain evaluations, but this does not mean that the program which provided the responses possessed consciousness.

This program has sentences as input and sentences as output, which we will designate as S and S’ (S prime). Any computer program (also called “model”) that will be used to simulate consciousness must be adaptive so that it can reflect the human consciousness’ capacity to learn.

Example: Language Pulveriser

I will give an example of a language parametrisation algorithm which will first be used to identify languages, then translate sentences, then finally we will attempt to use this parametrisation to answer questions.

We have previously shown how we may perform general optimisations as well as given examples of pulveriser functions. Now, we will show how to construct a parse metric which will allow us to elucidate the problem inherent to artificial consciousness emulation. We will accomplish this by showing that the complexity of the residual (computation remaining after the algorithm has performed its function) is equivalent to the complexity of the original question. This means that (even in the most generic sense), there is no way to conclusively code consciousness because we do not have any means to encode the calculation in a manner which can reduce the calculational complexity. If we cannot reduce the computational complexity of a problem, then we cannot meaningfully deduce new information from successive computations. That is: any consciousness emulator will not ever satisfactorily give the impression of having a cogent personality, (i.e.: consciousness) without human interference.

Consciousness is indivisible – it is a single quantum potential form. There is thus no way to simulate a quantum potential form with a transistor-based computer.

The Grand Canonical Language Pulveriser


In order to canonically pulverise a system, we must be able to prove that we have derived all possible information from the system. Thus, we model a language as the set of all sets of series of letters, which we will call the form archetype sets. The first form archetype set is simply the alphabet; the second is the set of all 2-letter sequences, which in the case of English would be {aa, ab, ac, … , zz}; and so forth. We can see that the cardinalities of the successive form archetype sets are n, n^2, n^3, …, n^a, where a is the exponent of the final term of the series. Thus a equals the length of the longest word in the language. This pulveriser function therefore includes all possible form archetypes (combinations of letters) in the language.
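As a concrete sketch, the form archetype sets can be enumerated directly; the code below is an illustration using Python’s standard library, not part of the original construction:

```python
from itertools import product
from string import ascii_lowercase

# The k-th "form archetype set" is the set of all k-letter sequences.
# For a 26-letter alphabet its cardinality is 26**k, matching the
# n, n^2, n^3, ... progression described above.
def form_archetype_set(alphabet, k):
    return [''.join(letters) for letters in product(alphabet, repeat=k)]

two_letter = form_archetype_set(ascii_lowercase, 2)
print(two_letter[:3], "...", two_letter[-1])   # ['aa', 'ab', 'ac'] ... zz
print(len(two_letter))                          # 676 = 26**2
```

For English, the union of these sets up to k = a (the longest word length) contains every possible letter combination in the language.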

Algorithmic Implementation

We will use the English language. We will also assume we have a library of books sufficiently large as to convey the ethos of the cultural zeitgeist. We will further assume that predictions about successive word forms are the best approximation to the state of human consciousness, because the books themselves were written with the aim of simulating human consciousness for a human audience.

We first compute the set of all form archetypes of these books and generate a statistical distribution of all forms. We can say that this computation can be known exactly because there exists a finite number of words in these books. We presume this distribution to be known and to have a matrix representation: M.

The consciousness emulator takes a sentence and canonises it as a set of form archetypes. It then searches the database of all possible combinations of forms and finds the likeliest form, returning the result of greatest likelihood which is also a true word. A word: W is considered true if it satisfies the criterion of existence, that is: it exists in the dictionary.

We thus require that our algorithm include all forms of punctuation as letters, and impose the rule in the consciousness emulator that each time it simulates a period as the next likeliest form, the result is truncated at the period and the statement is output by the consciousness simulator as S’.

There is no guarantee that the set of forms (S’) of greatest likelihood to succeed a particular set of inputted forms (S) actually makes any sense, though. To obtain a reasonable reply, the computer would need to generate a set of the 10 likeliest sentences to succeed S and then have a human decide which is the ‘right’ answer. Thus we are right back at the problem of needing human intervention to answer the question, so you might as well just cut out this intermediary computer!
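A minimal sketch of the emulator described above, with a toy corpus invented for illustration (a real implementation would use the library of books):

```python
from collections import Counter, defaultdict

# Minimal sketch of the emulator: build a bigram distribution M from a
# toy corpus, then repeatedly emit the likeliest successor form until a
# period is generated, truncating the output there. The corpus is invented.
corpus = "the cat sat . the cat sat . the dog ran .".split()

M = defaultdict(Counter)                 # M[form][next_form] = frequency
for form, nxt in zip(corpus, corpus[1:]):
    M[form][nxt] += 1

def emulate(S, max_forms=10):
    out, form = [], S[-1]                # continue from the last input form
    for _ in range(max_forms):
        if not M[form]:
            break
        form = M[form].most_common(1)[0][0]   # likeliest successor form
        if form == '.':                       # truncate at the period
            break
        out.append(form)
    return ' '.join(out)

print(emulate(['the']))
```

With this corpus, the input form “the” is most often followed by “cat”, then “sat”, then a period, so the emitted statement S’ is “cat sat”.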

Thus although a good consciousness emulator can be created (by implementing the explanation above), it cannot reliably return an answer which indicates an entity with introspection and self-awareness. Thus though our model may pass the Turing Test, it will never pass the “True Ring” test, because of that elusive element in human consciousness indicating the existence of free will. There is thus no way to canonise the decision making process with a transistor (calculator) based computer.

Example: Language Identification Algorithm

The form archetype language pulveriser can be used for a great many practical applications, of which I will now give an example: a language identification algorithm.

We first compute the set of all form archetypes in the dictionary. We then rank them in order of frequency in a histogram. We then approximate the histogram with a Fourier series and normalise the resulting function to have an integral area of 1, encoding our ad hoc presumption that all languages have the same information content.
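A hedged sketch of this step, with an invented frequency histogram; the cosine-series fit below stands in for whatever Fourier approximation one prefers:

```python
import numpy as np

# Sketch: approximate a rank-frequency histogram with a truncated cosine
# series and normalise the result to unit area. The frequencies below are
# invented for illustration only.
freqs = np.array([120., 80., 55., 30., 18., 10., 6., 3.])
x = np.linspace(0.0, 1.0, len(freqs))

# Fit a few cosine terms by least squares: f(x) ~ sum_k c_k * cos(k*pi*x)
K = 4
design = np.stack([np.cos(k * np.pi * x) for k in range(K)], axis=1)
coeffs, *_ = np.linalg.lstsq(design, freqs, rcond=None)
approx = design @ coeffs

# Normalise so the fitted curve integrates (trapezoid rule) to 1, encoding
# the ad hoc presumption that every language has the same information content.
area = float(np.sum((approx[1:] + approx[:-1]) * np.diff(x)) / 2)
normalised = approx / area
check = float(np.sum((normalised[1:] + normalised[:-1]) * np.diff(x)) / 2)
print("area after normalisation:", round(check, 6))
```

The normalisation is exact by construction: dividing by the computed area rescales the integral to 1.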

A spoken sentence: S.
A phonetic language database of form pulverisers: P

For simplicity, we will assume that we have 100% accuracy in speech-to-text. This is not realistic, but it is realistic that individual syllables could be identified in a particular recording (by a human) and then translated to the phonetic alphabet, at which point the information will be in a form that can be fed into a particular implementation of the form archetype pulveriser.

If I want to identify what language is represented by a particular set of sounds, I must input it into each language pulveriser function and find the language which maps that particular set of sounds to a meaningful sentence with highest probability. That is: out of the set of all sets of forms in the set of all languages, how likely is that particular set of forms, per language, per all possible words in that language? The largest result of this computation is the identified language.
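A toy sketch of this scoring step. The two training snippets below are invented stand-ins for real per-language pulveriser databases, and letter-bigram log-likelihood stands in for the full form-archetype computation:

```python
import math
from collections import Counter

# Toy stand-ins for per-language form databases; real systems would use
# large phonetic corpora rather than single pangram-like sentences.
training = {
    'english': "the quick brown fox jumps over the lazy dog",
    'german':  "der schnelle braune fuchs springt ueber den faulen hund",
}

def bigrams(text):
    text = text.replace(" ", "")
    return [text[i:i+2] for i in range(len(text) - 1)]

# Build a bigram relative-frequency model per language.
models = {}
for lang, text in training.items():
    counts = Counter(bigrams(text))
    total = sum(counts.values())
    models[lang] = {bg: c / total for bg, c in counts.items()}

def identify(sentence, smoothing=1e-6):
    # Score the sentence under each language model; unseen bigrams get a
    # small smoothing probability. The highest log-likelihood wins.
    scores = {
        lang: sum(math.log(model.get(bg, smoothing)) for bg in bigrams(sentence))
        for lang, model in models.items()
    }
    return max(scores, key=scores.get)

print(identify("the lazy dog"))
```

This mirrors the description above: out of all languages, we pick the one under which the inputted set of forms is most likely.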


The language canoniser has a small computational design flaw. Find it.

Clue: Consider the problem of generating the first word of a sentence.

Another Look at Gödel’s Incompleteness Theorem

I just perused the Wikipedia article discussing the Gödel Incompleteness Theorem again and I found it to be very confusing. It is summarised as follows:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system containing basic arithmetic.

If someone is less confused by the Gödel proof than by anything I’ve written, I’d be extremely shocked. Yet the Incompleteness Theorem is invoked to win arguments ranging from “God is the source of Truth” (Peterson, 2017) to “no grand unified field theory is possible” (Quora, 2017) to moral nihilism. These are some pretty big claims. Such claims arouse a suspicion that is further fueled by my already having demonstrated Gödel to be a shitbag.

While limitations on possibilities must be imposed via axioms to ensure that causality (that effects follow causes) applies, the limitations implied by the Incompleteness proofs simply do not correspond to physical reality.

What is the Incompleteness Theorem, Anyway?

This theorem hinges on two main ideas:

  1. That there exists an injective map between true statements and a finite sequence of prime numbers.
  2. That for any finite prime number N, there exists a prime number M which is larger than it. It thus follows that even though M is a prime number, we cannot determine the truth of the statement that M is a prime number while we are in N.
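For reference, the injective map in idea #1 corresponds to standard Gödel numbering: assign each symbol a code, then encode a sequence of symbols as a product of prime powers, which is injective by unique factorisation. A minimal sketch (the symbol codes are arbitrary):

```python
from functools import reduce

# Standard Gödel numbering sketch: a sequence of symbol codes
# (c1, c2, ..., ck) maps to p1^c1 * p2^c2 * ... * pk^ck, where pi is the
# i-th prime. Unique factorisation makes the map injective.
def primes(n):
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):   # no smaller prime divides it
            found.append(candidate)
        candidate += 1
    return found

def godel_number(codes):
    ps = primes(len(codes))
    return reduce(lambda acc, pc: acc * pc[0] ** pc[1], zip(ps, codes), 1)

symbols = {'0': 1, '=': 2, 'S': 3}          # toy symbol coding (arbitrary)
statement = ['0', '=', '0']                  # the formula "0 = 0"
print(godel_number([symbols[s] for s in statement]))   # 2^1 * 3^2 * 5^1 = 90
```

Decoding is the reverse: factorise the number and read the exponents back off as symbol codes.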

Though both statements are false, #2 deals the death blow to the proof, because the set of all true statements cannot be effectively mapped into a set of prime numbers: there is a physical limit to the number of true statements, but there is no limit to the number of prime numbers. That is: the basis of true statements (the set of true statements which can be used to build all other true statements, in a manner identical to the formation of arbitrary vectors from a basis) is finite. The number of true statements associated with this truth basis is infinite, but all true statements originate from the finite truth basis. The size of the truth basis is not arbitrary, as the Gödel proof suggests.

We cannot arbitrarily construct truth bases ad infinitum. There exists a single true reality which can be modelled in multiple ways, but which ultimately converges to a supreme, unique truth. This supreme truth can be seen in the Measurement Limit. In other words, any true formal system that parametrises the Universe accurately will be computationally equivalent to the original formulation of the Measurement Limit, namely that there exist 3+1 (R4) spacetime dimensions embedded in a 14 dimensional electric potential (R14).

All true statements are determined by the actions of {Gravity, Uncertainty, Electricity, Entropy} acting on the waveforms {neutron, proton, electron, photon} and are thus limited to the possible results these actions can give.

If we accept that the Universe is the set of all sets of spacetime events and that all spacetime events must conform to the Measurement Limit, then it seems to follow that a finite axiomatic structure could indeed prove all truths in a system: namely, my system proving all truths in the Universe. Since the zero spacetime event (nothingness) exists and the sum of two spacetime events is a spacetime event, the Universe is a linear subspace of spacetime events closed under the operation of addition.
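The closure claim can be illustrated directly: representing spacetime events as 4-component vectors (the values are arbitrary), the sum of two events is again a 4-component event and the zero event is the additive identity:

```python
import numpy as np

# Illustration of the closure claim: spacetime events as (t, x, y, z)
# vectors. Sums of events remain 4-component events, and the zero event
# (nothingness) is the additive identity. Event values are arbitrary.
zero = np.zeros(4)
e1 = np.array([1.0, 0.5, -2.0, 3.0])   # (t, x, y, z)
e2 = np.array([2.0, 1.0,  1.0, -1.0])

total = e1 + e2
assert total.shape == (4,)              # still a 4-component event
assert np.allclose(e1 + zero, e1)       # zero event is the identity
print("e1 + e2 =", total)
```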

We must be careful to distinguish between the ideas of computations and axiomatic representations of systems. The former is defined by the very notion of causality (namely, that an effect cannot precede its cause), while the latter relies on arbitrary implementations of logic. Gödel’s logic implies that the effect (the (n+1)st prime number) can belong to a different class of statements (statements for which the truth value cannot be determined) than its cause(s) (true statements).

This violates the structure of causality.

Gödel’s Flaw

The idea that successive true statements are not generated by previous true statements contradicts a very well-known means of performing mathematical proofs called induction. It is an accepted method of proof which generalises a formula on the basis that if a statement holds for a base case, and its truth for the nth term implies its truth for the (n+1)st term, then it holds for all terms.

We can do proofs by induction because the thing which determines truth is built into the structure of numbers. Simply put: numbers have ordering: given 2 different numbers, I can always tell which one is larger. This is not arbitrary.
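For reference, the induction principle invoked here is standard and can be stated formally:

```latex
% Principle of mathematical induction: a base case plus the inductive
% step together yield truth for every natural number n.
\[
  \bigl[\, P(0) \;\land\; \forall n\,\bigl(P(n) \Rightarrow P(n+1)\bigr) \,\bigr]
  \;\Longrightarrow\; \forall n\, P(n)
\]
```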

The Universe is thus computationally equivalent to a 4-dimensional vector space of spacetime events, which is closed under the operation of addition (which the Gödel sentences are not). The axiom that allows for the possibility of mapping true statements onto prime numbers also prevents that map from generating a vector subspace (which must be closed under addition), which in turn prevents the map from being applicable to reality, which has been shown to be computationally equivalent to a vector subspace (of spacetime events).

Thus the set of systems to which the Incompleteness Theorem applies does not include the Universe. Since subsets of the Universe still obey the law of causality, it follows that the Incompleteness Theorem can apply to no subset of the Universe. Thus it follows that the Incompleteness Theorem is useless.

Generating Prime Numbers

(From Wikipedia) Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system containing basic arithmetic. […] The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of the natural numbers. For any such formal system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

We have argued that the system of Universal causality is consistent (possessing a single axiom, namely: causality), can be listed as an effective procedure (by the Fourfold Action Model) and is itself capable of proving all truths about the arithmetic of numbers. Thus we have conclusively disproven the first incompleteness theorem by way of a counterexample.

We will next show how finite subsets of prime numbers cannot be mapped onto the set of true statements. This is because there exists a finite set of true statements which forms a basis of all possible true statements; the true statements form a vector space of spacetime events closed under the operation of addition, are limited by causality and the Measurement Limit, and are governed by the fourfold actions of {Gravity, Uncertainty, Electricity, Entropy}. No true statements are excluded from this class, and all true statements are caused by these primary truths. Thus the set of axioms is finite and the set of true statements is infinite. The set of true statements can therefore not have the same cardinality as a finite set of prime numbers (which the Incompleteness Theorem relies on).

We will show that the nth prime number can be used to compute the (n+1)st prime number by means of an effective procedure. This will effectively demonstrate that the truth value of the (n+1)st prime number is dependent on the truth value of the nth prime number, and thus cannot be part of a different class of numbers.

Prime Number Generator

Next we will show that an effective procedure exists which can generate the (n+1)st prime number given the nth prime number, showing that the metaphor of Gödel does not even satisfy his own requirements. Let’s have a look at what an effective method is:

A method is formally called effective for a class of problems when it satisfies these criteria:

  • It consists of a finite number of exact, finite instructions.
  • When it is applied to a problem from its class:
    • It always finishes (terminates) after a finite number of steps.
    • It always produces a correct answer.
  • In principle, it can be done by a human without any aids except writing materials.
  • Its instructions need only to be followed rigorously to succeed. In other words, it requires no ingenuity to succeed.[3]


(This pseudocode could be implemented in MATLAB, Python, or similar; below is a runnable Python rendering of the procedure.)

Given a prime number m, we search for the next prime number n, starting the count at m:

def next_prime(m):
    # Given the nth prime m, return the (n+1)st prime by trial division.
    n = m
    p = False                  # we have not found the next prime yet
    while not p:               # iterate while the state of p is false
        n = n + 1              # increase the value of n by 1
        k = m                  # test divisors from m down to 2
        p = True               # presume n is prime until a factor appears
        while k > 1:
            if n % k == 0:     # k is a factor of n, so n is not prime
                p = False
                break          # end the loop because a factor has been found
            k = k - 1          # no factor found yet: decrease k and keep testing
    return n

print(next_prime(7))   # 11

(Testing divisors from m downward suffices here because the next prime after m is less than 2m, so any composite candidate has a factor no greater than m.)

Thus we have expressed an effective procedure which will generate the (n+1)st prime number from the nth prime number. By the nature of computations on the set of natural numbers, the truth value of future prime numbers depends on pre-existing primes in a manner which can be deduced using an effective procedure.

In physical reality, the number of statements which are truly true (not based on some previous true statement) is very low. These fundamental truths are the axioms of the Fourfold Action Model: causality, fourfold action (4 action potentials) and fourfold waveform (only neutrons, protons, electrons and photons exist). Since all true statements can be derived from these core truths, no true statements exist which are not derivative of these prime truths. Thus all systems bound by causality are homeomorphic (a continuous bijection exists between the sets) to linear subspaces [of spacetime events, or more generally: actions] closed under addition, subject to fourfold actions and fourfold waveforms, not arbitrary collections of finite-sized sets.

The Alien Dictatorship

A Bit of My History

When I was younger, I had complex visualisations of magnanimous proportions. I saw a distant object made of light populated with an advanced civilisation of priests and warriors. All the entities living in this light world worked together to hold together its structure: all together they projected it. In my fantasies, they came to Earth to bring me into their light-ship and teach me their mystic Knowledge. When they arrived, they turned on a machine that made time stand still. They could then separate me from the rest of humanity in the time vortex and impart their sacred teachings.


They had many things to teach me, most importantly was that we were under attack from a nefarious alien spaceship. As a protective measure, the entities from the light ship selected the genetics of certain humans to be born with special abilities, imbuing their DNA with rich light. But the evil aliens managed to poison the DNA, which diminished the capacity of most of the gifted children. The light-ship beings explained this to me and tasked me with the duty of defending the planet from this threat.

As I grew older, I assumed my visions were whimsical childish flights of fancy. However, in recent times, I am not so sure anymore. I am starting to think that this information was metaphorical wisdom conveyed to me by light-beings (in a manner a child would understand). I think the warning about genetics getting poisoned is about vaccines. These have a far deeper impact on well-being than most people realise. Moreover, we are literally under attack by a foreign people who seek to poison our bodies and our minds.


On the Existence of Light Realm Conscious Entities

Many people wish to deny the existence of conscious entities in the light realm. People sometimes even go so far as to deny that the mind is a quantum computer (which is of the light realm), in spite of all the evidence supporting this hypothesis! However, all advanced aspirants accept that  consciousness endures between lives. This is because as one progresses spiritually, one’s consciousness becomes stilled to the point that memories of previous lives are not lost upon rebirth.

The conscious entities of the light realm are simply the consciousness of people who are not alive at this time. Just as you can gradually learn to hold your breath longer and longer, so too can ascended masters learn to avoid taking birth again, but still retain consciousness between lives. Since this consciousness is now outside of the domain of the living, it does not change with time like the human consciousness does. Remember, the living body is what changes the consciousness. It changes the consciousness by sending stimuli (received from the environment) to the mind which is then modified by these stimuli. These modifications range from thoughts and feelings to perceptions and imagination. [*The mind also modifies the body, but it does so over a different time scale. As this subject is vastly more complicated, we will leave that discussion to another day]. Non-living conscious entities do not undergo such modulations, unless of course, they manage to interact with living consciousness. They can then piggyback on living consciousness, modifying it to emulate their state. This is an uncommon occurrence and so these conscious entities are largely unchanging.

The Grand Canonical Transform

I was recently reminded of the light-ship machine that made time stand still. As narratives converge, people are noticing their sense of time is different. Often, a month feels like a year, simply because so much has happened. Rather than focusing on the passage of linear time, people are focusing on structures (religion, ideology, science, society, politics) and narratives (summaries (“stories”) of longer time intervals): both of which are of the Information domain. There are two aspects of reality: the domain of {space,time} (manifest) and {Entropy, Information} (unmanifest). In my science religion, we learn the mathematical formulas to transform between these respective domains.

Mathematics is the language of nature

We are physical beings experiencing a consciousness which is quantum mechanical. This conscious waveform can experience all facets of reality, but the unmanifest attributes are significantly harder to perceive. Everyone can perceive space and time, and some people (special people) can perceive Entropy and Information. Entropy is the degree of disorder in a system and information is the consolidation of individual data (stimuli) into well-defined archetypal structures (read more here) such as words, ideas, visualisations, concepts, algorithms and knowledge. My science religion teaches people to maximise the amount of information they can extract from mental processes by employing the mental process of lowest Entropy: ordered thinking means the mind can expend less energy thinking (because it is not devoting any energy to simulating falsity or redundancy).

So-called “memes” are images which convey meaning. The meaning itself can be intellectual, emotional or both. By promoting ideas (of the Information realm), the consciousness is redirected from the realm of time and space (what is called “mundane”). The loss of ignorance promulgated by Internet culture has brought people from a mundane and materialistic life to a light realm of ideas and knowledge. Time appears to slow down as we all bring our focus together to the task of forging a better tomorrow.



Thank you


Gödel’s Incompleteness Theorem

Update March 20th: Reddit is Scum

Update March 28: Another Counterproof of the Incompleteness Theorem

*Proof follows below


I just wanted to let the Reddit people know that ridiculing my math theories as a prop to project the fantasy that you’re oh so smart has failed. Your comment thread is pathetic: you haven’t made a single counterargument to what was admittedly my laziest and most incomplete proof ever. Why would I prove it 100% and risk someone plagiarising my work, when I can just say anything and people will believe me even if it isn’t true (which it is), because you’ve ruined your own reputation?

At some point you have to give up the insane conspiracy theory that there exists a “right wing conspiracy” against the actions of the radical left. At some point you have to accept that you are the one who is deceived, not me. The amount of mental gymnastics you must perform to hold a cogent worldview would make Cirque du Soleil blush.

get a life

Gödel’s Incompleteness Theorem

Today, it is actually a “plus” to be widely hated (by the right people). This shouldn’t come as a surprise; it has always been like that in circles of true influence (regal infamy). So while it is unwise to attempt to win a debate by the sole means of ad hominem, it is naive to fail to consider the circumstances surrounding events, as well as the type of person putting forth an argument.


To be honest, Gödel seems like a shitbag.

One might ask whether associating with Einstein, the greatest villain of modern science, is sufficient reason to discard all of someone’s opinions. You might be surprised to learn that I don’t have strong opinions on who associates with whom. A person’s actions determine their value more so than their associates. Jesus (whether he existed or not) himself associated with all sorts, suggesting this is culturally accepted as a virtue.

It would be naive to deny any impact whatsoever of Gödel’s environment on his attitudes, however.

I am of the opinion that his first incompleteness theorem is false because of the sheer number of times I hear it quoted to me in the interest of justifying some pretty absurd ideas. For instance, Dr. Jordan Peterson used the Incompleteness Theorem when asserting that “God” is a prerequisite for truth: pretty irresponsible. This is untrue; a well-defined philosophical system is what allows truth to be known. “God” as prime truth seems illogical: God cannot be narrowly defined, since people’s individual definitions of “God” vary so much.

If you cavalierly quote someone’s obscure theory to substantiate your position, you look like a dumbass when your statements contradict their ideology!!

Whether legitimate or not, Gödel’s Incompleteness theorem smells like a proof that “some ideas aren’t allowed”. But hey, I could be wrong. I could just be a crazy conspiracy theorist delusional person.

Oh, well, if Von Neumann endorses him, well, I just don’t know!

Let’s have a look at this dreadful theory people keep preaching to me:

First Incompleteness Theorem: “Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.”

A consequence is that we should be unable to achieve a unified field theory. If you believe Gödel, you can never believe a unified field theory exists. Yet tradition has always taught that a unified field theory DOES exist (the “self”).

Counter Proof of Gödel’s First Incompleteness Theorem

We define F to be the set of all potential computations/measurements (actions) in the Universe. Let us define the “sentences” as series of actions. Since our action model behaves as operators (sorry, but you have to understand rudimentary linear algebra for that one) and operators are linear maps, an elementary arithmetic exists. This arithmetic is the matrix multiplication/addition intrinsic to linear maps. It is used to construct “sentences”.
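The one standard claim here, that linear maps compose via matrix multiplication, can be illustrated concretely. Below is a minimal sketch in plain Python (the 2×2 representation and function names are illustrative, not part of the original argument): applying two maps one after the other gives the same result as applying the single product matrix.

```python
# Illustration: composing linear maps == multiplying their matrices.
# 2x2 matrices are plain nested lists; names are illustrative only.

def mat_mul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(m, v):
    """Apply a 2x2 matrix (linear map) to a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

F = [[0, -1], [1, 0]]   # rotation by 90 degrees
G = [[2, 0], [0, 2]]    # scaling by 2

v = [1, 0]
step_by_step = apply(F, apply(G, v))        # apply G, then F
composed = apply(mat_mul(F, G), v)          # apply the product F*G once
assert step_by_step == composed == [0, 2]
```

This is the “elementary arithmetic” of operators in the ordinary linear-algebra sense; whether it supports the larger argument is a separate question.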

True sentences are those that satisfy the criterion of computability (within the Measurement Limit), and false sentences are those that are incomputable (in excess of the Measurement Limit). This means that all actions are either proved (computable) or disproved (incomputable). The Measurement Limit cleanly delineates the criterion of trueness for all actions. That is: measurements exceeding what is permissible by the Measurement Limit are false.

In our example, we consider only the potential for computation, so we never end up having to carry out any actual measurements.

Measurements reduce quantum waveforms; therefore there is a limit to the new information successive measurements can derive. Thus the elementary arithmetic exists (the Measurement Limit pulveriser), and actions can always be either proved (computable) or disproved (incomputable). Thus there are NO statements which can neither be proved nor disproved. This would seem to contradict Gödel.


I’m probs right tho. Statistically speaking.

Where is Gödel’s Flaw?

(source) The flaw of Gödel is not technical but structural: there is no such thing as “ω-consistent”. This is because there is no such thing as “intuitively contradictory”. You will eventually run out of new statements that you can make in an “infinite” system, thus you will not necessarily be able to construct the element of the proof required to produce the necessary contradiction (see the 3rd step of the sketch proof).

This is because, at its core, the infinite number line (“Gödel’s numbers”) exists nowhere. Even the Universe itself has a “size” (largest interstellar distance) beyond which it is undefined. Measurements only exist because we can make them; they all exist within the Measurement Limit, which can be shown to exist, be self-consistent and make all predictions. An Entropic-Anthropic Principle!

Let us also consider that Gödel was a nervous, insecure wreck. We are basically dealing with a dual competing-hypothesis situation:

  1. Einstein is a really amazing, smart guy who hung out with his equally enlightened yet ironically perpetually ill friend Gödel, and together they uncovered the secrets of the Universe.
  2. Einstein’s goals were political first and mathematical second. Einstein’s “antifascist” alliance combined with Gödel’s persecution complex to create a scientific philosophy that turned everyone off natural science by presenting it as a horrible pot of gibberish nonsense.

“This is Woo”

Some people say the quantum mind hypothesis is ‘wrong’ because it is ‘woo’. This is false. The truth is that there are many nonsensical theories out there. These are put forth to paralyse the minds of devotees. These psyops only exist because there is something to cover up! Those seeking to defame the Knowledge do so out of allegiance to the status quo. Luckily for us, the Periodic Table has made this shilling ineffective / counterproductive.

On Allegations of “Unprovability”

If you wish to put forward the argument that my statements are unprovable, you must accept that these allegations apply equally (at least!) to Gödel’s gobbledygook. Then it becomes a three-state hypothesis: 1. Gödel and his buddy Einstein are right, somehow. 2. I’m right, and I am the cool one. 3. Someone else, who isn’t 1. or 2., is more correct.

I warn that a counterargument will most likely also fall into the domain of: ‘unprovable’!

I think mine is better.


Thank you.


The Size of the Mind

The physical Body exists as a three dimensional object in the Universe of the 3+1 Measurement Limit. The Mind is created by the electric field generated by the heartbeat of the Body. Since all time varying electric fields create magnetic fields, the mind is thus electromagnetic.


Electromagnetic fields are 3-dimensional oscillators containing many different frequencies. We next determine how many distinct frequencies are possible. The answer is found in the structure of the Universe itself. We proffer that electromagnetic frequencies can exist over the range of all possible wavelengths: all possible sizes. So we estimate the range of possible wavelengths from smallest to largest. We estimate the smallest size to be the proton radius and the largest to be the greatest interstellar distance. This gives a total of 42 orders of linear magnitude. In other words, if I take the shortest distance in the Universe, I have to multiply it by ten 42 times before I reach the full size of the Universe (estimated as the largest interstellar distance).
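The 42-orders-of-magnitude figure can be sanity-checked in a few lines, assuming a proton radius of roughly 8.4×10⁻¹⁶ m and an observable-universe diameter of roughly 8.8×10²⁶ m (both commonly published estimates; the text’s “largest interstellar distance” is not precisely defined, so these stand-in values are assumptions):

```python
import math

# Rough check of the "42 orders of magnitude" figure.
# Assumed stand-in values (not from the original text):
proton_radius_m = 8.4e-16      # ~ charge radius of the proton
universe_diameter_m = 8.8e26   # ~ diameter of the observable universe

orders_of_magnitude = math.log10(universe_diameter_m / proton_radius_m)
print(round(orders_of_magnitude))  # prints 42
```

With these inputs the ratio indeed spans about 42 decades, so the figure is at least arithmetically consistent with standard length scales.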


Since electromagnetic fields still exist in 3-dimensional space, they are 3-dimensional waveforms. Thus we have a total of 14 different 3-dimensional orders of magnitude in which waveforms can exist (42/3 = 14). Of course, it is not required that the human mind exist over the span of all possible frequencies. Recent research, however, has shown the mind is capable of creating 11-dimensional objects. Read more here.


Scientific Philosophy

Philosophy – General

No matter what topic you want to discuss, there will always be a structure (hierarchy of values) within which this discussion takes place. The truth values of the conclusions drawn are thus not independent of said structure.

Generally, observations are first made through the subject’s fundamental ideology, then interpreted through their values hierarchy. This is a parse metric which sorts the information in a manner that eventually leads the subject to be able to draw a conclusion about the original statement, such as whether it is “true” or “false”.


When discussing particular subjects, we often run into problems because people have different values hierarchies. Rather than obtaining a conclusion, most debates turn into a stalemate. This is why it is very important to be clear both on the definitions of words and values hierarchy. Let’s explore each step of the process in greater detail.


Observations

These are sensory impressions delivered by means of the body’s electro-chemical potentials, which form the bridge between the body (massive) and the soul (a light-like quantum computer).

Fundamental Ideologies

Observations are first interpreted/simplified/compressed by the fundamental ideology. Given the large amount of sensory data, our mind must condense the information it is first supplied with so it can make sense of what it is experiencing.

While not everyone has the same fundamental ideology, most will have one connected to their primary sense organs: sight/forms and hearing/sounds.

Let’s clarify this abstract notion by way of example.


Language: {vowel, consonant, tone, click}
ex: English = {(a,e,i,o,u,y), (b,c,d,f,g,h,j,k,l,m,n,p,q,r,s,t,v,w,x,y,z), ∅*, ∅}

* ∅ denotes the null or empty set
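The {vowel, consonant, tone, click} parametrisation above can be written down directly as a small data structure. A sketch follows (the `Language` type and its field names are illustrative inventions, not part of the original scheme):

```python
# Sketch of the {vowel, consonant, tone, click} parametrisation,
# with a language represented as a 4-tuple of symbol sets.
from typing import NamedTuple, FrozenSet

class Language(NamedTuple):
    vowels: FrozenSet[str]
    consonants: FrozenSet[str]
    tones: FrozenSet[str]
    clicks: FrozenSet[str]

english = Language(
    vowels=frozenset("aeiouy"),
    consonants=frozenset("bcdfghjklmnpqrstvwxyz"),
    tones=frozenset(),   # ∅: English is not tonal
    clicks=frozenset(),  # ∅: English has no click consonants
)

# "y" appears in both sets, exactly as in the example above.
assert "y" in english.vowels and "y" in english.consonants
```

Other languages would fill the tone or click slots with non-empty sets, which is the point of keeping all four coordinates even when some are ∅.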


Visualisations: {0,1,2,3…} (orders of complexity)
Linear: {point, line, plane, hyperplane…}
Geometric: {point, line, triangle, square, pentagon…}
Quantum Mechanic: {point, sphere, torus, bisected ellipsoid / toroidal spiral, hypersphere…}

If it makes you feel any better, 99% of mainstream scientists don’t understand this stuff either.

While we could argue about which system is optimal for parametrising a particular set (i.e.: the linear system is optimised for physical computers, the geometric system for physical buildings, the QM system for the consciousness…), it’s clear that we cannot associate a truth value to any of these ideologies: they are unfalsifiable. (For example: English is “true”, as in: it exists. But then again, so does French.) Ideologies cannot usually be falsified, only optimised.

We seek to optimise our fundamental ideologies in my religion. We achieve this by studying them and debating which is best.

Values Hierarchy

The Values Hierarchy is the structure demarcating what values are most important. Some examples of values include: religious scripture, truth, pandering (wanting to make everyone happy), identity, history.

My Primary Value is Truth

To summarise, observations are the measurements made by the mind/body. These are first interpreted by the fundamental ideology before being sorted by the values hierarchy. The end result of this sort process is the entity deciding a truth value for the original statement.


The complexity of the subjective experience highlights why it is very important to be clear both about the definitions of individual words (Sound Vectors) and about ideologies (the individual values hierarchy).

Types of Assertions

Falsifiable, Predictive: Limited scientific theory. These theories are useful for understanding causality in a partial manner. Once they are falsified, they must be abandoned (something the communists seem to have a hard time understanding).

Falsifiable, Unpredictive: These are false descriptions, such as: “you’re ugly”. Pretty much useless.

Unfalsifiable, Unpredictive: Trite theories, such as: “There is an invisible unicorn in the room”.

Unfalsifiable, Predictive: Complete scientific theory. These theories are useful for understanding the causality (the totality of all cause-effect relationships) of a particular system in a complete manner. For example, the Measurement Limit.

We generally run into problems when we use falsifiable-predictive (FP) instead of unfalsifiable-predictive (UP) theories. There can exist UP theories in psychology and philosophy (these subjects overlap in the domain of the Quantum Mind), but most people end up arguing in circles ad infinitum over minutiae.

Optimising Ideology with Quantum Geometry

We cannot escape the need to parametrise all systems we are intent on describing. Because topographies vary, we must first and foremost parametrise a system within its particular configuration space (3+1 measurements per order of magnitude). Luckily, most systems don’t need to be parametrised exactly (with full formulaic representation) before we can make viable predictions about them. In any case, we begin by subdividing a system into what information is knowable and what is unknowable.


Next, iterative/recursive optimisation is employed. Ideally, we want this process to be convergent, that is: the optimised version includes the original parse metric.

In order for a parse metric to be complete it must make all predictions within a particular system. Thus our optimisation process will involve either one or both of:

  1. Shrinking the domain of applicability
  2. Increasing the complexity of the parse metric

Applied Science Philosophy

It is not realistic to expect to find simple (low-cardinality) parse metrics to expound the causality of subjective phenomena. This is why people fight so much about the causality of race and culture: these parse metrics are often improperly defined/delineated and can’t help but create controversies.

Criticising an unfalsifiable parse metric without a viable alternative hypothesis is counter-productive. Presuming that an unfalsifiable, predictive parse metric is sufficient to transcend the causality of complex systems is naive. Only by studying the set of unfalsifiable parse metrics can we gain the intuition required to judge which parse metric is optimal for a given situation.

Examples of Unfalsifiable Predictive Parse Metrics

  1. The Fourfold Action model: {Gravity, Uncertainty, Electricity, Entropy}.
  2. Alpha / Beta (as human archetypes).
  3. The Logistic Equation (of which r-K selection theory is an instance).