Tag Archives: knowledge

What Math Really Is

There is an almost ubiquitous misconception about what mathematics really is, and it is a misconception that genuinely demands correction.  I would guess that if the average person were asked “What is mathematics?”, they would respond with something along the lines of “well, it’s a bunch of rules that help you find certain numbers”.  While this may have been a correct answer long ago, it is far from correct today.

Fortunately, what mathematics actually is can be summarized very succinctly.  Mathematics is simply the process of making assumptions and proving what follows from them.  Hence, we all do math on a daily basis, whether talking to one another or thinking to ourselves:  “given what I know, I think that…”.  This is math.

This is also why math courses through calculus are terrible:  they are absurdly misleading.  The current curriculum is libelous to the discipline of mathematics and its practitioners, and action needs to be taken to address this.

At this point, high school language arts and composition courses may teach more math than actual math courses do.  Fundamental to mathematics is logic and its application in the context of sets.  Logic was essentially absent from the curriculum when I was in high school.  Yet math and language arts classes implicitly assume that students have a solid understanding of it when they are asked to make arguments.  Granted, as we are cognitively logical entities, we are trivially masters of logic.  But in terms of conveying it communicably, improved training is necessary.  I feel that overlooking this necessity is a grave miscalculation, one that has hindered scientific thinking (an ability from which every citizen of the world can drastically benefit) for far longer than it should have.  This needs to change.

The Relentless Ingroup Bias

With the termination of the military’s “Don’t Ask, Don’t Tell” policy as of yesterday, I am reminded of the fundamental concept at play that delayed this resolution for so long.  This fundamental concept is also at the core of many other issues:  gender discrimination, racial discrimination, religious discrimination, …, X discrimination.  It is at the heart of contemporary political partisan bickering, the wealth gap, and war itself.  This fundamental concept is the scarcity of resources.  When the set of resources available to a population is insufficient for that population, members of the population will inevitably compete for them.  The ingroup bias has become the catalyst for the nonuniform distribution of resources.

The ingroup bias follows from the ingroup instigation.  The heuristic for obtaining resources is cooperative gameplay:  we primitively wish to form alliances with people for the sole purpose of overpowering others (or groups of others) in order to ensure the acquisition of resources.  This is the biologically intrinsic (and evolutionarily reinforced) tendency which I call the ingroup instigation.  Given the objective of the ingroup (to work together), the ingroup bias (the tendency to favor individuals in the ingroup and disfavor those in the outgroup) follows.  If we assume this to be true, then the function determining which individuals will group is governed only by what best allows them to overpower other groups or individuals.  By default this starts with proximity and special cases of it such as family (note also how fundamental physical forces operate).  Then, once some groups gather many resources, it may be beneficial for them to team up as well.  This can be seen in political alliances, corporate mergers, residential segregation with respect to socioeconomic status (i.e. poor towns and rich towns), …, collisions of galaxies, etc.

It then follows that any minority, or group of individuals who hold less power, becomes a target of the majority, simply because it is easy to take their resources.  Whether the minority is based on gender, race, sexual orientation, or religion, the mere characteristic of being a minority (not being in the ingroup of those with the quantitative or qualitative power) becomes sufficient for hindering its progress and, in turn, targeting its resources.

Hopefully one day we will realize that “potential knowledge” is a good with no scarcity that can in turn be distributed to everyone without limits.  The amount of knowledge in a system will always be finite (but not constant), yet it will also always be an upper bound on usable resources in society (i.e. the amount of consumable resources in the system is dependent upon the amount of knowledge [on how to create such resources from raw resources] in the system at that time).

Update of Language Definition

Note that I have removed the factorization requirement from the definition of a language in Fundamental Knowledge Part 1;  so we will just have \mathcal{L}_{F,T,W}=F[T[W]].  This will remove some triviality in examples of fuzzy logic systems in the upcoming post.  The original motivation behind the factorization was that traditionally compound terms are considered formulas, but terms themselves are not considered formulas.  I don’t really see why we can’t let terms be formulas;  let us assume “substitutions” have already been made.

I have also removed the requirement that \varphi(\phi)=\varphi(\psi) for all \phi,\psi in a theory X where \varphi is a logic system.  Instead I have defined a logic system that satisfies this condition as a normal logic system.

Fundamental Knowledge-Part 2: Models

The next task is to absorb the traditional area of mathematical logic.  One key missing ingredient is a model.  Let us recall the traditional setup (taken from [1]).

Definition 2.1.  Let S be a set (of symbols).  An S-structure is a pair \mathfrak{A}=(A,\mathfrak{a}) where A is a nonempty set, called a universe, and \mathfrak{a} is a map sending symbols to elements, functions, and relations of A.  An assignment of an S-structure (A,\mathfrak{a}) is a map \beta:S\to A.  An S-interpretation is a pair \mathfrak{I}=(\mathfrak{A},\beta) where \mathfrak{A} is an S-structure and \beta is an assignment in \mathfrak{A}.

For shorthand notation, the convention (with some of my modifications) is to write:  c^\mathfrak{A}=\beta(c), (f(t_1,...,t_n))^\mathfrak{A}=\mathfrak{a}(f)(\beta(t_1),...,\beta(t_n)), and (xRy)^\mathfrak{A}=\beta(x)\mathfrak{a}(R)\beta(y).  These are the terms.  Formulas are then built from the terms using traditional (although this can be generalized) logical connectives.

The notion of a model is then defined via induction on formulas.

Definition 2.2.  Let \mathfrak{I}=(\mathfrak{A},\beta) be an S-interpretation.  We say that \mathfrak{I} satisfies a formula \phi (or is a model of \phi), denoted \mathfrak{I}\vDash\phi, if \phi^\mathfrak{A} holds, where \phi^\mathfrak{A} is defined via its components and \beta and \mathfrak{a} where necessary.
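To make Definitions 2.1 and 2.2 concrete, here is a minimal Python sketch of a finite S-interpretation and a satisfaction check for atomic formulas.  The signature (a constant c, a unary function f, a binary relation R), the universe, and all names below are my own illustrative choices, not part of the text.

```python
# A minimal sketch of an S-interpretation over a tiny signature.
# Symbols: constant "c", unary function "f", binary relation "R".

universe = {0, 1, 2, 3}

# The map 'a' of Definition 2.1: symbols -> functions/relations on the universe.
a = {
    "f": lambda x: (x + 1) % 4,   # interpret f as successor mod 4
    "R": lambda x, y: x < y,      # interpret R as the order relation
}

# The assignment beta: symbols -> elements of the universe.
beta = {"c": 0, "x": 1, "y": 2}
assert set(beta.values()) <= universe  # assignments land in the universe

def term_value(t):
    """Evaluate a term: either a symbol, or a pair ("f", subterm)."""
    if isinstance(t, str):
        return beta[t]
    op, sub = t
    return a[op](term_value(sub))

def satisfies(formula):
    """Check an atomic formula ("R", t1, t2) in this interpretation."""
    rel, t1, t2 = formula
    return a[rel](term_value(t1), term_value(t2))

# (x R f(y))^A : beta(x) < a(f)(beta(y)), i.e. 1 < 3
print(satisfies(("R", "x", ("f", "y"))))  # True
```

Satisfaction of compound formulas would then be defined by recursion on the connectives, exactly as the induction in Definition 2.2 suggests.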

Formal languages in convention are built up from the formulas mentioned above, which are nothing more than special cases of Alt Definition 1.3.  A model for a language is hence nothing more than an A-interpretation into a structure, where A is an alphabet (provided it is equipped with a logic system).  This is precisely what I have constructed in Part 1;  the symbols of W\subset A^* are mapped to the universe \mathcal{L}_{F,T,W}.  The next thing to establish is that every model is a language model.  This is trivial since a model by definition satisfies a set of formulas as well as compounds of them (i.e. it must satisfy a language).  Hence we have no need to trouble ourselves with interpretations and may simply stick to the algebra of Part 1.

While we have absorbed model theory, there are a few more critical topics to absorb from mathematical logic.  We return to the language of Part 1 (no pun intended).  Let X be a theory of \mathcal{L}_{F,T,W} and \varphi:F[X]\to V be a binary logic system.  A formula \phi\in\mathcal{L}_{F,T,W} is derivable in X if it is a proposition (i.e. is in F[X]).  We may write X\vdash\phi.  This definition is in complete agreement with the traditional one (namely, that there is a derivation:  a finite sequence of steps that begins with axioms and applies inference rules);  it is nothing more than saying \phi is in F[X].  Similarly, \phi\in\mathcal{L}_{F,T,W} is valid if \varnothing\vdash\phi, or equivalently, if it is derivable in every theory.  In our setup this would imply \phi\in F[\varnothing]=\varnothing.  Hence no formula is valid.

Let F have a unary operation \lnot and \varphi:F[X]\to V be a logic system on a theory X.

If we assume \lnot to be idempotent in the sense that \lnot\lnot\phi=\phi (strictly speaking, an involution), then since \varphi is a homomorphism, we have \varphi(\phi)=\varphi(\lnot\lnot\phi)=\lnot\lnot\varphi(\phi).  That is, the corresponding unary operation in V must also be idempotent on ran(\varphi).

Definition 2.3.  A unary operation \lnot (not necessarily idempotent) is consistent in \varphi if for all \phi\in F[X], \varphi(\phi)\neq\varphi(\lnot\phi).

If we assume \lnot is consistent in \varphi and that \varphi is a binary logic system, then the corresponding \lnot in V is idempotent since

\varphi(\phi)=0\Rightarrow\varphi(\lnot\phi)=1\Rightarrow\varphi(\lnot\lnot\phi)=\lnot\lnot\varphi(\phi)=0.

Again, proofs in a binary system are independent of the choice of valence (i.e. which element of V plays the role of 0 and which of 1).  If we assume consistency and idempotency, then we have a nonidentity negation which is idempotent on the range.  Assuming a binary system and idempotency yields either a trivial mapping of propositions (all to 0 or all to 1), or that \lnot is consistent and idempotent on V.  And lastly, if we assume all three (idempotency and consistency of \lnot together on a binary system), we obtain a surjective assignment with an idempotent negation in V.
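The case analysis above can be checked mechanically when V = {0, 1}.  A small sketch (the encoding is mine) that enumerates all four unary operations on V and confirms that the only consistent one is the swap 0 ↔ 1, which is idempotent in the sense used here (an involution):

```python
from itertools import product

# V = {0, 1}.  Enumerate all four unary operations on V, each encoded as a
# dict {0: image of 0, 1: image of 1}.
V = (0, 1)
unary_ops = [dict(zip(V, images)) for images in product(V, repeat=2)]

# "Consistent" (Definition 2.3) transported to V: neg(v) != v for every v.
consistent_ops = [neg for neg in unary_ops if all(neg[v] != v for v in V)]

# Only the swap 0 <-> 1 survives, and it is an involution: neg(neg(v)) == v.
print(consistent_ops)  # [{0: 1, 1: 0}]
assert all(neg[neg[v]] == v for neg in consistent_ops for v in V)
```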

Let \varphi:F[X]\to V be a binary logic system where V is a boolean algebra.  Then the soundness and completeness theorems are trivial.  Recall these statements:

Completeness Theorem.  For all formulas \phi and models \mathfrak{I} with \mathfrak{I}\vDash X,

\mathfrak{I}\vDash\phi\Rightarrow X\vdash\phi.

Soundness Theorem.  For all formulas \phi and models \mathfrak{I} with \mathfrak{I}\vDash X,

X\vdash\phi\Rightarrow\mathfrak{I}\vDash\phi.

Traditionally these apply to what we would call a binary logic system \varphi:F[X]\to V where V is a boolean algebra (hence F has a consistent, idempotent negation) under traditional operations;  in particular this fixes the operational/relational structures of F, T, and W, but X is arbitrary.  In this setup, all “formulas” (or what we would hence call propositions, since they are generated by a theory) are trivially satisfiable since they have a language model.  Hence Soundness is true.  Moreover, since they are propositions in a binary logic system, they are in some F[X] for a theory X and are hence derivable;  so we have Completeness.

Lastly, we wish to address Gödel’s Second Incompleteness Theorem;  recall its statement:

Gödel’s Second Incompleteness Theorem.  A theory contains a statement of its own consistency if and only if it is inconsistent.

We have only defined what it means for a unary operation in a logic system to be consistent.  Hence we can say that a binary logic system with a unary operation is consistent if its unary operation is consistent.  But all of these traditional theorems of mathematical logic assume a binary logic system where V is a boolean algebra, \lnot is idempotent, and the map \varphi:F[X]\to V is surjective.  Hence \lnot is consistent (from the discussion above), and the consequent of the theorem is false.

The weakest possible violation of the antecedent of Gödel’s theorem is to use a structure to create itself (i.e. to be self-swallowing), which makes no sense, let alone to use it to create a larger structure within which lies a statement about the initial structure.  That a binary logic system with a unary operation could contain a statement of its own consistency is itself a contradiction, since the theory itself, together with the statement \phi, are in a metalanguage.  It is like saying that one needs only the English language to describe the algebraic structure of the English language.  As we said at the end of Part 1, one can get arbitrarily close to doing this (using English to construct some degenerate form of English), but one can never have multiple instances of a single language in a language loop.  Another example would be having the class of all sets and then attempting to prove, using only the sets and operations on them, that there is a class containing them.

Hence the antecedent is also false.  So both implications are vacuously true.

[1]  Ebbinghaus, H.-D., J. Flum, and W. Thomas.  Mathematical Logic.  Second Edition.  Undergraduate Texts in Mathematics.  New York: Springer-Verlag.  1994.

Fundamental Knowledge-Part 1: The Language Loop

We must start with a language.  A language can be defined in two ways.  First, let us begin with the axioms of pairing, union, and powerset, and the schema of separation.  These give us cartesian products of sets, and hence functions.

Definition 1.1.  An n-ary operation on a set X is a map O:X^n\to X.  A structure is a set X together with a collection of operations on it.  The signature of a structure X is the sequence (n_1,...,n_k,...) where n_k is the number of k-ary operations.

Definition 1.2.  Let X and Y be structures with the same signature such that each k-ary operation of X is assigned to a k-ary operation of Y (i.e. f(O_i)=O^i where O_i is the ith k-ary operation of X).  A homomorphism between structures X and Y is a map \varphi:X\to Y such that

\displaystyle \varphi(O_i(x_1,...,x_n))=O^i(\varphi(x_1),...,\varphi(x_n)).
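A familiar concrete instance of Definition 1.2 (my example, not from the text):  the reals under addition and the positive reals under multiplication are structures sharing the signature with a single binary operation, and the exponential map is a homomorphism between them.

```python
import math

# X = (reals, +) and Y = (positive reals, *) each carry one binary operation,
# so they have the same signature.  The map phi = exp is a homomorphism:
#   phi(x + y) == phi(x) * phi(y)
def phi(x):
    return math.exp(x)

x, y = 1.5, 2.25
assert math.isclose(phi(x + y), phi(x) * phi(y))
```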

Note that a nullary operation on X is a map O:X^0\to X;  since X^0 is a one-element set, such a map simply picks out an element of X.  Now let A be a set, which we will call an alphabet;  its elements will be called letters.  A monoid X has a nullary operation 1\in X, called a space, and an associative binary operation for which 1 is an identity, which will simply be denoted by concatenation.  We define the free monoid on A as the monoid A^* consisting of all strings of elements of A.  We now have two definitions of a language, of which the first is traditional and the second is mine:

Definition 1.3.  A language is a subset of A^*.
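Python strings over a fixed alphabet realize the free monoid A^* directly, with the empty string as the space and string concatenation as the operation; a language in the traditional sense is then literally a subset.  A quick sketch (all names mine):

```python
from itertools import product

# The free monoid A* on the alphabet A = {"a", "b"}: all finite strings,
# with the empty string as the space (identity) and concatenation as product.
A = ["a", "b"]

def strings_up_to(n):
    """All elements of A* of length at most n."""
    out = [""]
    for k in range(1, n + 1):
        out.extend("".join(p) for p in product(A, repeat=k))
    return out

# A language (Definition 1.3) is any subset of A*,
# e.g. the strings that start with the letter "a":
language = {w for w in strings_up_to(3) if w.startswith("a")}

# Monoid laws hold for string concatenation.
assert "ab" + "" == "ab" and "" + "ab" == "ab"
assert ("a" + "b") + "a" == "a" + ("b" + "a")
```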

Alt Definition 1.3.  Let W\subset A^*, T be a relational structure (a set together with an n-ary relation), and F be a structure.  The language \mathcal{L}_{F,T,W} is defined as F[T[W]] where X[Y] is the free X-structure on Y.  In particular elements of W are called words, elements of T[W] are called terms, and elements of \mathcal{L}_{F,T,W} are called formulas.

Definition 1.4.  A theory of \mathcal{L}_{F,T,W} is a subset X\subset\mathcal{L}_{F,T,W}.  Elements of a theory are called axioms.  Elements of F[X] are called propositions.  A theory X of \mathcal{L}_{F,T,W} is called a reduced theory if for all \phi,\psi\in X, \psi\neq O(\phi,x_1,...,x_{n-1}) for all n-ary operations of F and all placements of \phi in evaluation of the operation.  (That is, the theory is reduced if no axiom is in the orbit of another).

For example, the theory \mathcal{L}_{F,T,W} is called the trivial theory.  The theory \varnothing is called the empty (or agnostic) theory.

Definition 1.5.  An n-ary logic system on a theory X is a homomorphism \varphi:F[X]\to V where F and V have the same signature and V has cardinality n.  We may also say the logic system is normal if \varphi(\phi)=\varphi(\psi) for all \phi,\psi\in X.
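A sketch of Definition 1.5 with n = 2, assuming propositions in F[X] are represented as nested tuples over two axioms and V = {0, 1} carries the usual boolean operations.  The representation and all names are invented for illustration.

```python
# A toy binary logic system: a homomorphism varphi from (a representation of)
# F[X] to V = {0, 1}, commuting with the operations not/and/or of F.

# Assign truth values to the axioms of the theory X = {"p", "q"}.
valuation = {"p": 1, "q": 0}

def varphi(formula):
    """Homomorphism F[X] -> V; axioms are strings, compounds are tuples."""
    if isinstance(formula, str):
        return valuation[formula]
    op, *args = formula
    vals = [varphi(arg) for arg in args]
    if op == "not":
        return 1 - vals[0]
    if op == "and":
        return vals[0] & vals[1]
    if op == "or":
        return vals[0] | vals[1]
    raise ValueError(op)

# varphi(("and", "p", ("not", "q"))) == varphi("p") & (1 - varphi("q")) == 1
print(varphi(("and", "p", ("not", "q"))))  # 1
```

Note that this particular logic system is not normal, since the two axioms receive different values.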

In traditional logic V is a two element boolean algebra.  Traditional logic also has a special kind of function on its language.

Definition 1.6.  A quantifier on \mathcal{L}_{F,T,W} is a function \exists:T[W]\times\mathcal{L}_{F,T,W}\to\mathcal{L}_{F,T,W}.  We may write:

\exists(x\in X,\phi)=(\exists x\in X)\phi.

In particular, it is a pseudo operation, and gives the language a pseudo structure.  This is similar to modules, in that a product of a term and a formula is sent to a formula.
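Viewed this way, a quantifier is just an ordinary function from term-formula pairs to formulas.  A string-level sketch (the representation is mine):

```python
# Definition 1.6 treats the quantifier as a function
#   exists : T[W] x L -> L.
# Here terms and formulas are plain strings for illustration.

def exists(term, formula):
    """Send a (term, formula) pair to the quantified formula."""
    return f"(\u2203{term}){formula}"

phi = exists("x \u2208 X", "P(x)")
print(phi)  # (∃x ∈ X)P(x)
```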

Hence our initial assumption of four axioms (together with the ability to understand the English language) has in turn given us the ability to create a notion of a language, of which a degenerate English can be construed as a special case.  This is certainly circular in some sense, but in foundations we must appeal to some cyclic process.  One subtlety worth noting is that the secondary language created will always be “strictly bounded above” by the initial language;  the two are not truly equivalent.  (In fact this last statement is similar to the antecedent of Gödel’s Second Incompleteness Theorem.)