https://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&feed=atom&action=historyExpected Float Entropy Minimisation - Revision history2024-03-29T13:49:51ZRevision history for this page on the wikiMediaWiki 1.34.1https://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=438&oldid=prevJonathan Mason at 17:56, 10 December 20212021-12-10T17:56:04Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 17:56, 10 December 2021</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l7" >Line 7:</td>
<td colspan="2" class="diff-lineno">Line 7:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In 2021 the theory was shown to easily extend to include the concept and mathematical formulation of Model Unity<ref name=Mason2021>Mason, J. W. (2021), Model Unity and the Unity of Consciousness: Developments in Expected Float Entropy Minimisation. Entropy, 23, 11. doi:10.3390/e23111444</ref>, which is closely related to the unity (and disunity) of consciousness and provides a different notion of integration to that of IIT. The intention behind Model Unity is to answer questions such as why different individuals do not have shared perception, why individual visual perception is unified, and why visual perception is phenomenally very different to auditory perception. </div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In 2021 the theory was shown to easily extend to include the concept and mathematical formulation of Model Unity<ref name=Mason2021>Mason, J. W. (2021), Model Unity and the Unity of Consciousness: Developments in Expected Float Entropy Minimisation. Entropy, 23, 11. doi:10.3390/e23111444</ref>, which is closely related to the unity (and disunity) of consciousness and provides a different notion of integration to that of IIT. The intention behind Model Unity is to answer questions such as why different individuals do not have shared perception, why individual visual perception is unified, and why visual perception is phenomenally very different to auditory perception. </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Due to properties such as learning, the brain is very biased toward certain system states and therefore determines typical system states and, in theory, a probability distribution over the set of all system states. This opens up the possibility of applying [https://en.wikipedia.org/wiki/Information_theory information theory] type approaches and EFE is a form of expected conditional entropy where the condition involves relationship parameters. EFE is a measure of the expected amount of information required to specify the state of a system (such as an artificial or [https://en.wikipedia.org/wiki/Neural_circuit biological neural network]) beyond what is already known about the system from the relationship parameters. For certain non-uniformly random systems, particular choices of the relationship parameters are isolated from other choices in the sense that they give much lower Expected Float Entropy values and, therefore, the system defines relationships. According to the theory, in the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience. The principal article (Quasi-Conscious Multivariate Systems<ref name=Mason2016>Mason, J. W. (2016), Quasi-conscious multivariate systems. Complexity, 21: 125-147. doi:10.1002/cplx.21720</ref>) on this mathematical theory was published in 2015 and was followed by the article (From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation<ref name=Mason2019>Mason, J. W. (2019), From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy, 21, 60. doi:10.3390/e21010060</ref>) in 2019. EFE first appeared in a publication in 2012<ref name=Mason2012>Mason, J. W. (2013), Consciousness and the structuring property of typical data. Complexity, 18: 28-37. doi:10.1002/cplx.21431</ref>. </div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Due to properties such as learning, the brain is very biased toward certain system states and therefore determines typical system states and, in theory, a probability distribution over the set of all system states. This opens up the possibility of applying [https://en.wikipedia.org/wiki/Information_theory information theory] type approaches and EFE is a form of expected conditional entropy where the condition involves relationship parameters. EFE is a measure of the expected amount of information required to specify the state of a system (such as an artificial or [https://en.wikipedia.org/wiki/Neural_circuit biological neural network]) beyond what is already known about the system from the relationship parameters. For certain non-uniformly random systems, particular choices of the relationship parameters are isolated from other choices in the sense that they give much lower Expected Float Entropy values and, therefore, the system defines relationships. According to the theory, in the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience. The principal article (Quasi-Conscious Multivariate Systems<ref name=Mason2016>Mason, J. W. (2016), Quasi-conscious multivariate systems. Complexity, 21: 125-147. doi:10.1002/cplx.21720</ref>) on this mathematical theory was published in 2015 and was followed by the article (From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation<ref name=Mason2019>Mason, J. W. (2019), From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy, 21, 60. doi:10.3390/e21010060</ref>) in 2019<ins class="diffchange diffchange-inline">. The extension to Model Unity was introduced in the article (Model Unity and the Unity of Consciousness: Developments in Expected Float Entropy Minimisation<ref name="Mason2021"/>) published in 2021</ins>. EFE first appeared in a publication in 2012<ref name=Mason2012>Mason, J. W. (2013), Consciousness and the structuring property of typical data. Complexity, 18: 28-37. doi:10.1002/cplx.21431</ref>. </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The nomenclature “Float Entropy” comes from the notion of floating a choice of relationship parameters over a state of a system, similar to the idiom “to float an idea”. Optimisation methods are used in order to obtain the relationship parameters that minimise Expected Float Entropy. A process that performs this minimisation is itself a type of learning method.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The nomenclature “Float Entropy” comes from the notion of floating a choice of relationship parameters over a state of a system, similar to the idiom “to float an idea”. Optimisation methods are used in order to obtain the relationship parameters that minimise Expected Float Entropy. A process that performs this minimisation is itself a type of learning method.</div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l13" >Line 13:</td>
<td colspan="2" class="diff-lineno">Line 13:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Overview==</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Overview==</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Relationships are ubiquitous among mathematical structures. In particular, weighted relations (also called weighted graphs and [https://en.wikipedia.org/wiki/Weighted_network weighted networks]) are very general mathematical objects and, in the finite case, are often handled as adjacency matrices. They are a generalisation of graphs and include all [https://en.wikipedia.org/wiki/Function_(mathematics) functions] since functions are a rather constrained type of [https://en.wikipedia.org/wiki/Graph_(discrete_mathematics) graph]. It is also the case that [[consciousness]] is awash with relationships; for example, red has a stronger relationship to orange than to green, relationships between points in our field of view give rise to geometry, some smells are similar whilst others are very different, and there is a wealth of other relationships involving many senses such as between the sound of someone’s name, their visual appearance and the timbre of their voice. Expected Float Entropy includes weighted relations as parameters and, for certain non-uniformly random systems, certain choices of weighted relations are isolated from other choices in the sense that they give much lower Expected Float Entropy values. Therefore, systems such as the brain define relationships and, according to the theory, in the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Relationships are ubiquitous among mathematical structures. In particular, weighted relations (also called weighted graphs and [https://en.wikipedia.org/wiki/Weighted_network weighted networks]) are very general mathematical objects and, in the finite case, are often handled as adjacency matrices. They are a generalisation of graphs and include all [https://en.wikipedia.org/wiki/Function_(mathematics) functions] since functions are a rather constrained type of [https://en.wikipedia.org/wiki/Graph_(discrete_mathematics) graph]. It is also the case that [[consciousness]] is awash with relationships; for example, red has a stronger relationship to orange than to green, relationships between points in our field of view give rise to geometry, some smells are similar whilst others are very different, and there is a wealth of other relationships involving many senses such as between the sound of someone’s name, their visual appearance and the timbre of their voice. Expected Float Entropy includes weighted relations as parameters and, for certain non-uniformly random systems, certain choices of weighted relations are isolated from other choices in the sense that they give much lower Expected Float Entropy values. Therefore, systems such as the brain define relationships and, according to the theory, in the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience.</div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">The theory involves a hierarchy of relational models and at the lowest level the primary models involve pairs of weighted relations.</ins></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Expected Float Entropy minimisation is very general in scope. For example, the theory has been successfully applied in the context of image processing<ref name="Mason2016" /> but also applies to waveform recovery from audio data<ref name="Mason2012" />.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Expected Float Entropy minimisation is very general in scope. For example, the theory has been successfully applied in the context of image processing<ref name="Mason2016" /> but also applies to waveform recovery from audio data<ref name="Mason2012" />.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=430&oldid=prevJonathan Mason at 17:51, 23 November 20212021-11-23T17:51:46Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 17:51, 23 November 2021</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l44" >Line 44:</td>
<td colspan="2" class="diff-lineno">Line 44:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For a given system, let <math>\widehat{S}</math> denote the set of all possible ways to view the system as a collection of subsystems. Let <math>\mathcal{X}\in\widehat{S}</math>, and define</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For a given system, let <math>\widehat{S}</math> denote the set of all possible ways to view the system as a collection of subsystems. Let <math>\mathcal{X}\in\widehat{S}</math>, and define</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>\mu(\mathcal{X},P):=\left(\sum_{X\in\mathcal{X}}efe(\mathfrak{R}_{X},\mathfrak{U}_{X},P_{X})\right)-efe(\mathfrak{R},\mathfrak{U},P),</math></div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>\mu(\mathcal{X},P):=\left(\sum_{X\in\mathcal{X}}efe(\mathfrak{R}_{X},\mathfrak{U}_{X},P_{X})\right)-efe(\mathfrak{R},\mathfrak{U},P),</math></div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>where each term is individually minimised with respect to the choice of primary models used, and the last term is the minimum EFE for the whole system. Furthermore, define</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>where <ins class="diffchange diffchange-inline"><math>P_{X}</math> is the marginal probability distribution for the subsystem <math>X</math>, </ins>each term is individually minimised with respect to the choice of primary models used, and the last term is the minimum EFE for the whole system. Furthermore, define</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>M(P):=\min_{\mathcal{X}\in\widehat{S}}\mu(\mathcal{X},P).</math></div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>M(P):=\min_{\mathcal{X}\in\widehat{S}}\mu(\mathcal{X},P).</math></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The definition of Model Unity is then as follows.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The definition of Model Unity is then as follows.</div></td></tr>
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=429&oldid=prevJonathan Mason at 17:31, 23 November 20212021-11-23T17:31:31Z<p></p>
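Once the per-partition minimum EFE values are available, the formulas for <math>\mu(\mathcal{X},P)</math> and <math>M(P)</math> in the revision above are simple bookkeeping: sum the minimised EFE of each subsystem, subtract the minimised EFE of the whole system, and take the minimum over candidate partitions. A minimal Python sketch of that bookkeeping follows; the function names and all numeric EFE values are hypothetical placeholders, since evaluating <math>efe(\mathfrak{R},\mathfrak{U},P)</math> itself requires the weighted relations and the probability distribution, which are omitted here.

```python
def mu(subsystem_min_efes, whole_system_min_efe):
    """mu(X, P): sum of each subsystem's individually minimised EFE
    minus the minimised EFE of the whole system."""
    return sum(subsystem_min_efes) - whole_system_min_efe

def model_unity_measure(candidate_partitions):
    """M(P): minimum of mu(X, P) over the candidate partitions X supplied,
    each given as (list of subsystem minimum EFEs, whole-system minimum EFE)."""
    return min(mu(parts, whole) for parts, whole in candidate_partitions)

# Hypothetical precomputed minimum-EFE values for two candidate partitions.
candidates = [
    ([1.2, 0.9], 1.8),       # partition into two subsystems
    ([0.7, 0.6, 0.9], 1.8),  # partition into three subsystems
]
print(round(model_unity_measure(candidates), 6))  # 0.3
```

In a full implementation the set of candidate partitions would range over <math>\widehat{S}</math>, all ways of viewing the system as a collection of subsystems, and each EFE value would come from its own minimisation over primary models.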
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 17:31, 23 November 2021</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l43" >Line 43:</td>
<td colspan="2" class="diff-lineno">Line 43:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For a given system, let <math>\widehat{S}</math> denote the set of all possible ways to view the system as a collection of subsystems. Let <math>\mathcal{X}\in\widehat{S}</math>, and define</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For a given system, let <math>\widehat{S}</math> denote the set of all possible ways to view the system as a collection of subsystems. Let <math>\mathcal{X}\in\widehat{S}</math>, and define</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>:<math>\mu(\mathcal{X},P):=\left(\sum_{X\in\mathcal{X}}<del class="diffchange diffchange-inline">\</del>efe(\mathfrak{R}_{X},\mathfrak{U}_{X},P_{X})\right)-<del class="diffchange diffchange-inline">\</del>efe(\mathfrak{R},\mathfrak{U},P),</math></div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>:<math>\mu(\mathcal{X},P):=\left(\sum_{X\in\mathcal{X}}efe(\mathfrak{R}_{X},\mathfrak{U}_{X},P_{X})\right)-efe(\mathfrak{R},\mathfrak{U},P),</math></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>where each term is individually minimised with respect to the choice of primary models used, and the last term is the minimum EFE for the whole system. Furthermore, define</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>where each term is individually minimised with respect to the choice of primary models used, and the last term is the minimum EFE for the whole system. Furthermore, define</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>M(P):=\min_{\mathcal{X}\in\widehat{S}}\mu(\mathcal{X},P).</math></div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>M(P):=\min_{\mathcal{X}\in\widehat{S}}\mu(\mathcal{X},P).</math></div></td></tr>
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=428&oldid=prevJonathan Mason: The page has been updated to include some details about Model Unity.2021-11-23T17:26:50Z<p>The page has been updated to include some details about Model Unity.</p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 17:26, 23 November 2021</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l1" >Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>'''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the <del class="diffchange diffchange-inline">intuitive idea that consciousness may be some kind of minimum entropy interpretation of system states. It is formulated with </del>the <del class="diffchange diffchange-inline">aim of explaining (up to relationship isomorphism) how the brain defines the content </del>of consciousness <del class="diffchange diffchange-inline">at least with respect to all relationships and associations within subjective experience and the structural content comprised of such relationships. For example</del>, <del class="diffchange diffchange-inline">one might ask how </del>the <del class="diffchange diffchange-inline">brain defines </del>the <del class="diffchange diffchange-inline">perceived geometry of the field of view or the perceived relationships between different colours, or between different audible frequencies. At higher structural levels there are also perceived relationships between different objects and between objects and words for example</del>.</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>'''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the <ins class="diffchange diffchange-inline">following underlying postulate about </ins>the <ins class="diffchange diffchange-inline">nature </ins>of consciousness, <ins class="diffchange diffchange-inline">in which </ins>the <ins class="diffchange diffchange-inline">word ''interpretation'' means relational model when translated to </ins>the <ins class="diffchange diffchange-inline">mathematical domain</ins>.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Due to properties such as learning, the brain is very biased toward certain system states and therefore determines typical system states and, in theory, a probability distribution over the set of all system states. This opens up the possibility of applying [https://en.wikipedia.org/wiki/Information_theory information theory] type approaches and EFE <del class="diffchange diffchange-inline">has some similarities with </del>conditional <del class="diffchange diffchange-inline">Shannon </del>entropy <del class="diffchange diffchange-inline">except </del>the condition <del class="diffchange diffchange-inline">involved is comprised of </del>relationship parameters. EFE is a measure of the expected amount of information required to specify the state of a system (such as an artificial or [https://en.wikipedia.org/wiki/Neural_circuit biological neural network]) beyond what is already known about the system from the relationship parameters. For certain non-uniformly random systems, particular choices of the relationship parameters are isolated from other choices in the sense that they give much lower Expected Float Entropy values and, therefore, the system defines relationships. <del class="diffchange diffchange-inline">In </del>the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience. The principal article (Quasi-Conscious Multivariate Systems<ref name=Mason2016>Mason, J. W. (2016), Quasi-conscious multivariate systems. Complexity, 21: 125-147. doi:10.1002/cplx.21720</ref>) on this mathematical theory was published in 2015 and was followed by the article (From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation<ref name=Mason2019>Mason, J. W. (2019), From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy, 21, 60. doi:10.3390/e21010060</ref>) in 2019. EFE first appeared in a publication in 2012<ref name=Mason2012>Mason, J. W. (2013), Consciousness and the structuring property of typical data. Complexity, 18: 28-37. doi:10.1002/cplx.21431</ref>. </div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins class="diffchange diffchange-inline">(The fundamental postulate of EFE minimisation). ''If we suppose that consciousness is given by an interpretation or representation of system states then, notwithstanding the possibility that a system may need to satisfy a number of requirements to be conscious, among the infinitely many possible interpretations, consciousness is given by some form of minimum expected entropy interpretation of system states that yields an experience free of unnecessary discontinuities whilst exhibiting the intrinsic structural regularities of probable system states.''</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> </div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins class="diffchange diffchange-inline">The theory is formulated with the aim of explaining (up to relationship isomorphism) how the brain defines the content of consciousness at least with respect to all relationships and associations within subjective experience and the structural content comprised of such relationships. For example, one might ask how the brain defines the perceived geometry of the field of view or the perceived relationships between different colours, or between different audible frequencies. At higher structural levels there are also perceived relationships between different objects and between objects and words for example.</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> </div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins class="diffchange diffchange-inline">In 2021 the theory was shown to easily extend to include the concept and mathematical formulation of Model Unity<ref name=Mason2021>Mason, J. W. (2021), Model Unity and the Unity of Consciousness: Developments in Expected Float Entropy Minimisation. Entropy, 23, 11. doi:10.3390/e23111444</ref>, which is closely related to the unity (and disunity) of consciousness and provides a different notion of integration to that of IIT. The intention behind Model Unity is to answer questions such as why different individuals do not have shared perception, why individual visual perception is unified and why visual perception is phenomenally very different to auditory perception, for example. </ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> </div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Due to properties such as learning, the brain is very biased toward certain system states and therefore determines typical system states and, in theory, a probability distribution over the set of all system states. This opens up the possibility of applying [https://en.wikipedia.org/wiki/Information_theory information theory] type approaches and EFE <ins class="diffchange diffchange-inline">is a form of expected </ins>conditional entropy <ins class="diffchange diffchange-inline">where </ins>the condition <ins class="diffchange diffchange-inline">involves </ins>relationship parameters. EFE is a measure of the expected amount of information required to specify the state of a system (such as an artificial or [https://en.wikipedia.org/wiki/Neural_circuit biological neural network]) beyond what is already known about the system from the relationship parameters. For certain non-uniformly random systems, particular choices of the relationship parameters are isolated from other choices in the sense that they give much lower Expected Float Entropy values and, therefore, the system defines relationships. <ins class="diffchange diffchange-inline">According to the theory, in </ins>the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience. The principal article (Quasi-Conscious Multivariate Systems<ref name=Mason2016>Mason, J. W. (2016), Quasi-conscious multivariate systems. Complexity, 21: 125-147. doi:10.1002/cplx.21720</ref>) on this mathematical theory was published in 2015 and was followed by the article (From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation<ref name=Mason2019>Mason, J. W. 
(2019), From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy, 21, 60. doi:10.3390/e21010060</ref>) in 2019. EFE first appeared in a publication in 2012<ref name=Mason2012>Mason, J. W. (2013), Consciousness and the structuring property of typical data. Complexity, 18: 28-37. doi:10.1002/cplx.21431</ref>. </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The nomenclature “Float Entropy” comes from the notion of floating a choice of relationship parameters over a state of a system, similar to the idiom “to float an idea”. Optimisation methods are used in order to obtain the relationship parameters that minimise Expected Float Entropy. A process that performs this minimisation is itself a type of learning method.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The nomenclature “Float Entropy” comes from the notion of floating a choice of relationship parameters over a state of a system, similar to the idiom “to float an idea”. Optimisation methods are used in order to obtain the relationship parameters that minimise Expected Float Entropy. A process that performs this minimisation is itself a type of learning method.</div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l29" >Line 29:</td>
<td colspan="2" class="diff-lineno">Line 35:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>where <math>P</math> is the [https://en.wikipedia.org/wiki/Probability_distribution probability distribution] <math>P:\Omega_{S,V}\to [0,1]</math> determined by the bias of the system due to the long term effect of the system’s inherent learning paradigms in response to external stimulus.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>where <math>P</math> is the [https://en.wikipedia.org/wiki/Probability_distribution probability distribution] <math>P:\Omega_{S,V}\to [0,1]</math> determined by the bias of the system due to the long term effect of the system’s inherent learning paradigms in response to external stimulus.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>According to the theory, a system (such as the brain <del class="diffchange diffchange-inline">and its subregions</del>) defines a particular choice of <math>U</math> and <math>R</math> (up to a certain resolution) under the requirement that the EFE is minimized. Therefore, for a given system (i.e., for a fixed <math>P</math>), solutions in <math>U</math> and <math>R</math> to the equation</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>According to the theory, a system (such as <ins class="diffchange diffchange-inline">a subregion of </ins>the brain) defines a particular choice of <math>U</math> and <math>R</math> (up to a certain resolution) under the requirement that the EFE is minimized. Therefore, for a given system (i.e., for a fixed <math>P</math>), solutions in <math>U</math> and <math>R</math> to the equation</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>: <math>efe(R,U,P)=\min_{R'\in\Psi_{S},\,U'\in\Psi_{V}}efe(R',U',P)</math></div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>: <math>efe(R,U,P)=\min_{R'\in\Psi_{S},\,U'\in\Psi_{V}}efe(R',U',P)</math></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>are the weighted relations of interest. For example, when the theory is applied to digital photographs, U gives the relationships between colours and R gives the relationships that determine the geometry of the field of view.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>are the weighted relations of interest. For example, when the theory is applied to digital photographs, U gives the relationships between colours and R gives the relationships that determine the geometry of the field of view.</div></td></tr>
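The minimisation described above can be sketched numerically. The following is a hypothetical toy, not the paper's exact formulation: a 3-node binary system, relationships induced by pairwise agreement, a simple counting form of float entropy based on the set <math>A_{S_{i}}</math> of states whose induced relationships lie at least as close to <math>R</math>, and a brute-force search in place of proper optimisation. The names `induced` and `efe` and the biased distribution are illustrative assumptions only.

```python
import itertools
import math

# Hypothetical 3-node binary system. A candidate "relationship" R is an
# indicator tuple over node pairs; the relationships induced by a single
# state S relate two nodes iff they agree in S.
NODES = 3
STATES = list(itertools.product([0, 1], repeat=NODES))
PAIRS = [(i, j) for i in range(NODES) for j in range(i + 1, NODES)]

# Biased distribution P: the system has "learned" that nodes 0 and 1 agree.
weights = [4.0 if s[0] == s[1] else 1.0 for s in STATES]
P = [w / sum(weights) for w in weights]

def induced(s):
    """R{U,S}: pairwise agreement pattern induced by one state (toy form)."""
    return tuple(int(s[i] == s[j]) for i, j in PAIRS)

def efe(r):
    """Toy expected float entropy of r under P: for each state S_i, count
    the states whose induced relationships are at least as close to r
    (the set A_{S_i}), and take the P-expectation of log2 of that count."""
    ds = [sum(abs(a - b) for a, b in zip(r, induced(s))) for s in STATES]
    out = 0.0
    for i, d_i in enumerate(ds):
        a_i = sum(1 for d_j in ds if d_j <= d_i)  # |A_{S_i}|
        out += P[i] * math.log2(a_i)
    return out

# Brute-force minimisation over all candidate relationship parameters.
scores = {r: efe(r) for r in itertools.product([0, 1], repeat=len(PAIRS))}
best = min(scores, key=scores.get)
print(best, round(scores[best], 3))
```

Because P is biased toward states where nodes 0 and 1 agree, the minimising choice marks the (0, 1) pair as related, illustrating how a non-uniform P singles out particular relationship parameters.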
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">===Model Unity and the unity of consciousness===</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">The theory of EFE minimisation was extended in 2021 to include the definition of Model Unity<ref name=Mason2021>Mason, J. W. (2021), Model Unity and the Unity of Consciousness: Developments in Expected Float Entropy Minimisation. Entropy, 23, 11. doi:10.3390/e23111444</ref>, which is closely related to the unity (and disunity) of consciousness and provides a different notion of integration to that of IIT.</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">For a given system, let <math>\widehat{S}</math> denote the set of all possible ways to view the system as a collection of subsystems. Let <math>\mathcal{X}\in\widehat{S}</math>, and define</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">:<math>\mu(\mathcal{X},P):=\left(\sum_{X\in\mathcal{X}}efe(\mathfrak{R}_{X},\mathfrak{U}_{X},P_{X})\right)-efe(\mathfrak{R},\mathfrak{U},P),</math></ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">where each term is individually minimized with respect to the choice of primary models used and the last term is the minimum EFE for the whole system. Furthermore, define</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">:<math>M(P):=\min_{\mathcal{X}\in\widehat{S}}\mu(\mathcal{X},P).</math></ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">The definition of Model Unity is then as follows.</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">A system, with probability distribution <math>P:\Omega_{S,V}\to [0,1] </math> giving the probability of finding the system in any given state, has '''Model Unity''' if and only if <math>M(P)\geq 0</math>.</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">The intention behind Model Unity is to answer questions such as why different individuals do not have shared perception, why individual visual perception is unified and why visual perception is phenomenally very different to auditory perception, for example.</ins></div></td></tr>
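The definitions of <math>\mu(\mathcal{X},P)</math> and <math>M(P)</math> can be sketched as follows. This is a hedged illustration: the per-subsystem minimum EFE values are stubbed with made-up numbers rather than computed from a real <math>P</math>, the collection <math>\widehat{S}</math> is simplified here to proper set partitions (the trivial one-block split, which always gives <math>\mu=0</math>, is excluded), and the names `MIN_EFE`, `partitions`, `mu` and `M` are hypothetical.

```python
# Hypothetical minimum-EFE values (in bits) for each subsystem of a 3-node
# system; in a real application each value comes from minimising EFE over
# the relationship parameters of that subsystem with distribution P_X.
MIN_EFE = {
    frozenset({0}): 0.9, frozenset({1}): 0.9, frozenset({2}): 1.0,
    frozenset({0, 1}): 1.2, frozenset({0, 2}): 1.8, frozenset({1, 2}): 1.8,
    frozenset({0, 1, 2}): 2.0,
}

def partitions(items):
    """Yield every way of splitting `items` into non-empty disjoint blocks."""
    items = list(items)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):            # put `first` into an existing block
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [frozenset({first})]     # or into a block of its own

def mu(part):
    # mu(X, P): summed subsystem minimum EFE minus whole-system minimum EFE.
    whole = frozenset().union(*part)
    return sum(MIN_EFE[block] for block in part) - MIN_EFE[whole]

def M(nodes):
    # M(P): minimum of mu over decompositions into two or more subsystems.
    return min(mu(p) for p in partitions(nodes) if len(p) >= 2)

nodes = {0, 1, 2}
print(M(nodes), "Model Unity" if M(nodes) >= 0 else "no Model Unity")
```

With these stub values no split into subsystems models the system more cheaply than the whole (every <math>\mu\geq 0</math>), so the toy system has Model Unity; lowering the sub-block values enough would break it.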
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>===Connection with ideas in topology===</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>===Connection with ideas in topology===</div></td></tr>
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=415&oldid=prev82.6.91.120: Subsection removed due to new research showing that EFE is actually a rather special function and is not generally an approximation to the expressions that were shown in this subsection.2021-03-16T10:48:14Z<p>Subsection removed due to new research showing that EFE is actually a rather special function and is not generally an approximation to the expressions that were shown in this subsection.</p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 10:48, 16 March 2021</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l32" >Line 32:</td>
<td colspan="2" class="diff-lineno">Line 32:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>: <math>efe(R,U,P)=\min_{R'\in\Psi_{S},\,U'\in\Psi_{V}}efe(R',U',P)</math></div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>: <math>efe(R,U,P)=\min_{R'\in\Psi_{S},\,U'\in\Psi_{V}}efe(R',U',P)</math></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>are the weighted relations of interest. For example, when the theory is applied to digital photographs, U gives the relationships between colours and R gives the relationships that determine the geometry of the field of view.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>are the weighted relations of interest. For example, when the theory is applied to digital photographs, U gives the relationships between colours and R gives the relationships that determine the geometry of the field of view.</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;"></del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">===Connection with Shannon entropy===</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">The Shannon entropy <math>H</math> of a system is defined as</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">:<math>H:=\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\frac{1}{P(S_{i})}\right)</math>.</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">For <math>U\in\Psi_{V}</math>, <math>R\in\Psi_{S}</math> and <math>A_{S_{i}}:=\{S_{j}\in\Omega_{S,V}\colon d(R,R\{U,S_{j}\})\leq d(R,R\{U,S_{i}\})\}</math> the following equalities hold</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">:<math>\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\frac{1}{P(S_{i}\mid A_{S_{i}})}\right)</math></del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <math>=\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\frac{\sum_{S_{j}\in A_{S_{i}}}P(S_{j})}{P(S_{i})}\right)=H+\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\sum_{S_{j}\in A_{S_{i}}}P(S_{j}) \right)</math>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (1).</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;"></del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">The expression on the left is similar in form to the definition of Shannon entropy. The middle expression reveals the value to be similar to that of <math>efe(R,U,P)</math> when the probabilities in the argument of the logarithm are comparable. Indeed, <math>efe(R,U,P)</math> is an approximation of (1). The expression on the right of (1) shows the mathematical connection to Shannon entropy; the first term is the Shannon entropy <math>H</math> of the system and, with consideration of the log function, the second term has a negative value between <math>-H</math> and 0.</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>===Connection with ideas in topology===</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>===Connection with ideas in topology===</div></td></tr>
</table>82.6.91.120https://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=83&oldid=prevJonathan Mason at 11:33, 4 May 20202020-05-04T11:33:19Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 11:33, 4 May 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l1" >Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the intuitive idea that consciousness may be some kind of minimum entropy interpretation of system states. It is formulated with the aim of explaining (up to relationship isomorphism) how the brain defines the content of consciousness at least with respect to all relationships and associations within subjective experience and the structural content comprised of such relationships. For example, one might ask how the brain defines the perceived geometry of the field of view or the perceived relationships between different colours, or between different audible frequencies. At higher structural levels there are also perceived relationships between different objects and between objects and words for example.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the intuitive idea that consciousness may be some kind of minimum entropy interpretation of system states. It is formulated with the aim of explaining (up to relationship isomorphism) how the brain defines the content of consciousness at least with respect to all relationships and associations within subjective experience and the structural content comprised of such relationships. 
For example, one might ask how the brain defines the perceived geometry of the field of view or the perceived relationships between different colours, or between different audible frequencies. At higher structural levels there are also perceived relationships between different objects and between objects and words for example.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Due to properties such as learning, the brain is very biased toward certain system states and therefore determines typical system states and, in theory, a probability distribution over the set of all system states. This opens up the possibility of applying [https://en.wikipedia.org/wiki/Information_theory<del class="diffchange diffchange-inline">/ </del>information theory] type approaches and EFE has some similarities with conditional Shannon entropy except the condition involved is comprised of relationship parameters. EFE is a measure of the expected amount of information required to specify the state of a system (such as an artificial or [https://en.wikipedia.org/wiki/Neural_circuit<del class="diffchange diffchange-inline">/ </del>biological neural network]) beyond what is already known about the system from the relationship parameters. For certain non-uniformly random systems, particular choices of the relationship parameters are isolated from other choices in the sense that they give much lower Expected Float Entropy values and, therefore, the system defines relationships. In the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience. The principal article (Quasi-Conscious Multivariate Systems<ref name=Mason2016>Mason, J. W. (2016), Quasi-conscious multivariate systems. Complexity, 21: 125-147. doi:10.1002/cplx.21720</ref>) on this mathematical theory was published in 2015 and was followed by the article (From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation<ref name=Mason2019>Mason, J. W. (2019), From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy, 21, 60. doi:10.3390/e21010060</ref>) in 2019. 
EFE first appeared in a publication in 2012<ref name=Mason2012>Mason, J. W. (2013), Consciousness and the structuring property of typical data. Complexity, 18: 28-37. doi:10.1002/cplx.21431</ref>. </div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Due to properties such as learning, the brain is very biased toward certain system states and therefore determines typical system states and, in theory, a probability distribution over the set of all system states. This opens up the possibility of applying [https://en.wikipedia.org/wiki/Information_theory information theory] type approaches and EFE has some similarities with conditional Shannon entropy except the condition involved is comprised of relationship parameters. EFE is a measure of the expected amount of information required to specify the state of a system (such as an artificial or [https://en.wikipedia.org/wiki/Neural_circuit biological neural network]) beyond what is already known about the system from the relationship parameters. For certain non-uniformly random systems, particular choices of the relationship parameters are isolated from other choices in the sense that they give much lower Expected Float Entropy values and, therefore, the system defines relationships. In the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience. The principal article (Quasi-Conscious Multivariate Systems<ref name=Mason2016>Mason, J. W. (2016), Quasi-conscious multivariate systems. Complexity, 21: 125-147. doi:10.1002/cplx.21720</ref>) on this mathematical theory was published in 2015 and was followed by the article (From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation<ref name=Mason2019>Mason, J. W. 
(2019), From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy, 21, 60. doi:10.3390/e21010060</ref>) in 2019. EFE first appeared in a publication in 2012<ref name=Mason2012>Mason, J. W. (2013), Consciousness and the structuring property of typical data. Complexity, 18: 28-37. doi:10.1002/cplx.21431</ref>. </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The nomenclature “Float Entropy” comes from the notion of floating a choice of relationship parameters over a state of a system, similar to the idiom “to float an idea”. Optimisation methods are used in order to obtain the relationship parameters that minimise Expected Float Entropy. A process that performs this minimisation is itself a type of learning method.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The nomenclature “Float Entropy” comes from the notion of floating a choice of relationship parameters over a state of a system, similar to the idiom “to float an idea”. Optimisation methods are used in order to obtain the relationship parameters that minimise Expected Float Entropy. A process that performs this minimisation is itself a type of learning method.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Overview==</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Overview==</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Relationships are ubiquitous among mathematical structures. In particular, weighted relations (also called weighted graphs and [https://en.wikipedia.org/wiki/Weighted_network<del class="diffchange diffchange-inline">/ </del>weighted networks]) are very general mathematical objects and, in the finite case, are often handled as adjacency matrices. They are a generalisation of graphs and include all [https://en.wikipedia.org/wiki/Function_(mathematics)<del class="diffchange diffchange-inline">/ </del>functions] since functions are a rather constrained type of [https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)<del class="diffchange diffchange-inline">/ </del>graph]. It is also the case that [[consciousness]] is awash with relationships; for example, red has a stronger relationship to orange than to green, relationships between points in our field of view give rise to geometry, some smells are similar whilst others are very different, and there’s a vast array of other relationships involving many senses such as between the sound of someone’s name, their visual appearance and the timbre of their voice. Expected Float Entropy includes weighted relations as parameters and, for certain non-uniformly random systems, certain choices of weighted relations are isolated from other choices in the sense that they give much lower Expected Float Entropy values. 
Therefore, systems such as the brain define relationships and, according to the theory, in the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience.</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Relationships are ubiquitous among mathematical structures. In particular, weighted relations (also called weighted graphs and [https://en.wikipedia.org/wiki/Weighted_network weighted networks]) are very general mathematical objects and, in the finite case, are often handled as adjacency matrices. They are a generalisation of graphs and include all [https://en.wikipedia.org/wiki/Function_(mathematics) functions] since functions are a rather constrained type of [https://en.wikipedia.org/wiki/Graph_(discrete_mathematics) graph]. It is also the case that [[consciousness]] is awash with relationships; for example, red has a stronger relationship to orange than to green, relationships between points in our field of view give rise to geometry, some smells are similar whilst others are very different, and there’s a vast array of other relationships involving many senses such as between the sound of someone’s name, their visual appearance and the timbre of their voice. Expected Float Entropy includes weighted relations as parameters and, for certain non-uniformly random systems, certain choices of weighted relations are isolated from other choices in the sense that they give much lower Expected Float Entropy values. Therefore, systems such as the brain define relationships and, according to the theory, in the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Expected Float Entropy minimisation is very general in scope. For example, the theory has been successfully applied in the context of image processing<ref name="Mason2016" /> but also applies to waveform recovery from audio data<ref name="Mason2012" />.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Expected Float Entropy minimisation is very general in scope. For example, the theory has been successfully applied in the context of image processing<ref name="Mason2016" /> but also applies to waveform recovery from audio data<ref name="Mason2012" />.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l17" >Line 17:</td>
<td colspan="2" class="diff-lineno">Line 17:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If <math>S</math> is the set of nodes of a system, such as a neural network, then a state of the system <math>S_{i}</math> is given by the aggregate of the states of the nodes over some range <math>V:=\{v_{1},v_{2},\ldots,v_{m}\}</math> of node states. Therefore each state of the system <math>S_{i}</math> is determined by a corresponding function <math> f_{i}:S\to V</math>. The set of all possible states of the system is denoted <math>\Omega_{S,V}</math>.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If <math>S</math> is the set of nodes of a system, such as a neural network, then a state of the system <math>S_{i}</math> is given by the aggregate of the states of the nodes over some range <math>V:=\{v_{1},v_{2},\ldots,v_{m}\}</math> of node states. Therefore each state of the system <math>S_{i}</math> is determined by a corresponding function <math> f_{i}:S\to V</math>. The set of all possible states of the system is denoted <math>\Omega_{S,V}</math>.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Given an element <math>S_{i}\in\Omega_{S,V}</math>, the above definitions give rise to a [https://en.wikipedia.org/wiki/Canonical_map<del class="diffchange diffchange-inline">/ </del>canonical map] from <math>\Psi_{V}</math> to <math>\Psi_{S}</math>. That is, for <math>U\in\Psi_{V}</math>, the function <math>R\{U,S_{i}\}</math> defined by</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Given an element <math>S_{i}\in\Omega_{S,V}</math>, the above definitions give rise to a [https://en.wikipedia.org/wiki/Canonical_map canonical map] from <math>\Psi_{V}</math> to <math>\Psi_{S}</math>. That is, for <math>U\in\Psi_{V}</math>, the function <math>R\{U,S_{i}\}</math> defined by</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>R\{U,S_{i}\}(a,b):=U(f_{i}(a),f_{i}(b))</math>, for all <math>a,b\in S</math>,</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>R\{U,S_{i}\}(a,b):=U(f_{i}(a),f_{i}(b))</math>, for all <math>a,b\in S</math>,</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>is an element of <math>\Psi_{S}</math>.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>is an element of <math>\Psi_{S}</math>.</div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l23" >Line 23:</td>
<td colspan="2" class="diff-lineno">Line 23:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, the '''Float Entropy''' of a state of the system <math>S_{i}\in\Omega_{S,V}</math>, relative to <math>U</math> and <math>R</math>, is defined as </div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, the '''Float Entropy''' of a state of the system <math>S_{i}\in\Omega_{S,V}</math>, relative to <math>U</math> and <math>R</math>, is defined as </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>fe(R,U,S_{i}):=\log_{2}(\#\{S_{j}\in\Omega_{S,V}\colon d(R,R\{U,S_{j}\})\leq d(R,R\{U,S_{i}\})\})</math>,</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>fe(R,U,S_{i}):=\log_{2}(\#\{S_{j}\in\Omega_{S,V}\colon d(R,R\{U,S_{j}\})\leq d(R,R\{U,S_{i}\})\})</math>,</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>where <math>d</math> is a metric given by a [https://en.wikipedia.org/wiki/Matrix_norm<del class="diffchange diffchange-inline">/ </del>matrix norm] on the elements of <math>\Psi_{S}</math> in matrix form. In the article Quasi-Conscious Multivariate Systems<ref name="Mason2016" /> the <math>L_{1}</math> norm is used. The article also includes a more general definition of Float Entropy called Multirelational Float Entropy and the nodes of the system can be larger structures than individual neurons.</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>where <math>d</math> is a metric given by a [https://en.wikipedia.org/wiki/Matrix_norm matrix norm] on the elements of <math>\Psi_{S}</math> in matrix form. In the article Quasi-Conscious Multivariate Systems<ref name="Mason2016" /> the <math>L_{1}</math> norm is used. The article also includes a more general definition of Float Entropy called Multirelational Float Entropy and the nodes of the system can be larger structures than individual neurons.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The '''Expected Float Entropy (EFE)''' of a system, relative to <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, is defined as</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The '''Expected Float Entropy (EFE)''' of a system, relative to <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, is defined as</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>efe(R,U,P):=\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})fe(R,U,S_{i})</math>,</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>efe(R,U,P):=\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})fe(R,U,S_{i})</math>,</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>where <math>P</math> is the [https://en.wikipedia.org/wiki/Probability_distribution<del class="diffchange diffchange-inline">/ </del>probability distribution] <math>P:\Omega_{S,V}\to [0,1]</math> determined by the bias of the system due to the long-term effect of the system’s inherent learning paradigms in response to external stimuli.</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>where <math>P</math> is the [https://en.wikipedia.org/wiki/Probability_distribution probability distribution] <math>P:\Omega_{S,V}\to [0,1]</math> determined by the bias of the system due to the long-term effect of the system’s inherent learning paradigms in response to external stimuli.</div></td></tr>
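The Float Entropy and EFE definitions above can be sketched numerically. The following is an illustrative simplification rather than code from the cited articles: nodes and node states are indexed by integers, the weighted relations <math>U</math> and <math>R</math> are small hand-picked (hypothetical) symmetric matrices, <math>d</math> is the <math>L_{1}</math> distance, and <math>P</math> is taken uniform purely to keep the example short, even though the theory itself concerns strongly biased, non-uniform <math>P</math>.

```python
import itertools
import math

# Illustrative sketch of Float Entropy and EFE (not from the cited articles).
# A system state S_i is a tuple assigning one of m node states to each of n
# nodes; U (a relation on node states) and R (a relation on nodes) are
# symmetric matrices with entries in [0, 1].

def induced_relation(U, state):
    """R{U, S_i}(a, b) := U(f_i(a), f_i(b)) for all node pairs a, b."""
    n = len(state)
    return [[U[state[a]][state[b]] for b in range(n)] for a in range(n)]

def l1_distance(A, B):
    """L1 matrix-norm distance, the metric d on relations."""
    return sum(abs(x - y) for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def float_entropy(R, U, state, all_states):
    """fe(R, U, S_i): log2 of the number of states S_j whose induced
    relation R{U, S_j} lies at least as close to R as R{U, S_i} does."""
    d_i = l1_distance(R, induced_relation(U, state))
    count = sum(1 for s in all_states
                if l1_distance(R, induced_relation(U, s)) <= d_i)
    return math.log2(count)

def expected_float_entropy(R, U, P, all_states):
    """efe(R, U, P) := sum over S_i of P(S_i) * fe(R, U, S_i)."""
    return sum(p * float_entropy(R, U, s, all_states) for s, p in P.items())

# Tiny example: 2 nodes, binary node states, so Omega_{S,V} has 2^2 = 4 states.
states = list(itertools.product([0, 1], repeat=2))
U = [[1.0, 0.0], [0.0, 1.0]]              # hypothetical relation on node states
R = [[1.0, 1.0], [1.0, 1.0]]              # hypothetical relation on nodes
P = {s: 1 / len(states) for s in states}  # uniform only for brevity
print(expected_float_entropy(R, U, P, states))  # → 1.5
```

Minimising EFE would then mean searching over candidate choices of U and R for the pair giving the lowest value of expected_float_entropy under the system’s actual biased distribution P.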
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>According to the theory, a system (such as the brain and its subregions) defines a particular choice of <math>U</math> and <math>R</math> (up to a certain resolution) under the requirement that the EFE is minimized. Therefore, for a given system (i.e., for a fixed <math>P</math>), solutions in <math>U</math> and <math>R</math> to the equation</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>According to the theory, a system (such as the brain and its subregions) defines a particular choice of <math>U</math> and <math>R</math> (up to a certain resolution) under the requirement that the EFE is minimized. Therefore, for a given system (i.e., for a fixed <math>P</math>), solutions in <math>U</math> and <math>R</math> to the equation</div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l43" >Line 43:</td>
<td colspan="2" class="diff-lineno">Line 43:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>===Connection with ideas in topology===</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>===Connection with ideas in topology===</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>In its simplest form involving only “primary relationships” (i.e. just <math>R</math> and <math>U</math> as shown above) EFE minimisation can also be considered as a generalisation of the [https://en.wikipedia.org/wiki/Initial_topology<del class="diffchange diffchange-inline">/ </del>initial topology] (i.e. weak topology). To see this, note that the family of functions involved consists of the typical (probable) system states, the common domain of these functions is the set of system nodes (e.g. neurons, tuples of neurons or larger structures) and the common codomain is the set of node states. In the case of the initial topology a topology is already assumed on the common codomain and the initial topology is then the coarsest topology on the common domain for which the functions are continuous. In the case of EFE minimisation no structure is assumed on either the domain or codomain. Instead EFE minimisation simultaneously finds structures (for us weighted graphs, but topologies could in principle be used) on both the domain and codomain such that the functions are close (in some suitable sense) to being continuous whilst avoiding trivial solutions (such as the two-element trivial topology) for which arbitrary improbable functions (system states) would also be continuous. Thus we find the primary relational structures that the system itself defines. 
In this context objects (visual and auditory) are present and EFE then extends to secondary relationships between such objects by involving correlation, for example.</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>In its simplest form involving only “primary relationships” (i.e. just <math>R</math> and <math>U</math> as shown above) EFE minimisation can also be considered as a generalisation of the [https://en.wikipedia.org/wiki/Initial_topology initial topology] (i.e. weak topology). To see this, note that the family of functions involved consists of the typical (probable) system states, the common domain of these functions is the set of system nodes (e.g. neurons, tuples of neurons or larger structures) and the common codomain is the set of node states. In the case of the initial topology a topology is already assumed on the common codomain and the initial topology is then the coarsest topology on the common domain for which the functions are continuous. In the case of EFE minimisation no structure is assumed on either the domain or codomain. Instead EFE minimisation simultaneously finds structures (for us weighted graphs, but topologies could in principle be used) on both the domain and codomain such that the functions are close (in some suitable sense) to being continuous whilst avoiding trivial solutions (such as the two-element trivial topology) for which arbitrary improbable functions (system states) would also be continuous. Thus we find the primary relational structures that the system itself defines. In this context objects (visual and auditory) are present and EFE then extends to secondary relationships between such objects by involving correlation, for example.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Connection to other mathematical theories of consciousness==</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Connection to other mathematical theories of consciousness==</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>There are some similarities between the minimisation of Expected Float Entropy and the minimisation of surprise in [https://en.wikipedia.org/wiki/Karl_J._Friston<del class="diffchange diffchange-inline">/ </del>Karl J. Friston]’s [https://en.wikipedia.org/wiki/Free_energy_principle<del class="diffchange diffchange-inline">/ </del>Free energy principle]. The theory is also somewhat complementary to [https://en.wikipedia.org/wiki/Giulio_Tononi<del class="diffchange diffchange-inline">/ </del>Giulio Tononi]’s [https://en.wikipedia.org/wiki/Integrated_information_theory<del class="diffchange diffchange-inline">/ </del>Integrated information theory] (IIT) which was initially developed to quantify consciousness but gave little priority to how systems may define relationships.</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>There are some similarities between the minimisation of Expected Float Entropy and the minimisation of surprise in [https://en.wikipedia.org/wiki/Karl_J._Friston Karl J. Friston]’s [https://en.wikipedia.org/wiki/Free_energy_principle Free energy principle]. The theory is also somewhat complementary to [https://en.wikipedia.org/wiki/Giulio_Tononi Giulio Tononi]’s [https://en.wikipedia.org/wiki/Integrated_information_theory Integrated information theory] (IIT) which was initially developed to quantify consciousness but gave little priority to how systems may define relationships.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== See also ==</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== See also ==</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Information_theory<del class="diffchange diffchange-inline">/ </del>Information theory]</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Information_theory Information theory]</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Integrated_information_theory<del class="diffchange diffchange-inline">/ </del>Integrated information theory]</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Integrated_information_theory Integrated information theory]</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Free_energy_principle<del class="diffchange diffchange-inline">/ </del>Free energy principle]</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Free_energy_principle Free energy principle]</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* [[Consciousness]]</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* [[Consciousness]]</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Hard_problem_of_consciousness<del class="diffchange diffchange-inline">/ </del>Hard problem of consciousness]</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Hard_problem_of_consciousness Hard problem of consciousness]</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem<del class="diffchange diffchange-inline">/ </del>Mind–body problem]</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem Mind–body problem]</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Philosophy_of_mind<del class="diffchange diffchange-inline">/ </del>Philosophy of mind]</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* [https://en.wikipedia.org/wiki/Philosophy_of_mind Philosophy of mind]</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== References ==</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>== References ==</div></td></tr>
<!-- diff cache key amcs_wiki-mcswiki_:diff::1.12:old-82:rev-83 -->
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=82&oldid=prevJonathan Mason at 11:13, 4 May 20202020-05-04T11:13:00Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 11:13, 4 May 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l9" >Line 9:</td>
<td colspan="2" class="diff-lineno">Line 9:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Expected Float Entropy minimisation is very general in scope. For example, the theory has been successfully applied in the context of image processing<ref name="Mason2016" /> but also applies to waveform recovery from audio data<ref name="Mason2012" />.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Expected Float Entropy minimisation is very general in scope. For example, the theory has been successfully applied in the context of image processing<ref name="Mason2016" /> but also applies to waveform recovery from audio data<ref name="Mason2012" />.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>==Definitions and <del class="diffchange diffchange-inline">connection </del>with <del class="diffchange diffchange-inline">Shannon entropy</del>==</div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>==Definitions and <ins class="diffchange diffchange-inline">connections </ins>with <ins class="diffchange diffchange-inline">some areas of mathematics</ins>==</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Definitions===</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Definitions===</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For a nonempty set <math>S</math>, a weighted relation on <math>S</math> is a function of the form</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For a nonempty set <math>S</math>, a weighted relation on <math>S</math> is a function of the form</div></td></tr>
<tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l41" >Line 41:</td>
<td colspan="2" class="diff-lineno">Line 41:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The expression on the left is similar in form to the definition of Shannon entropy. The middle expression reveals the value to be similar to that of <math>efe(R,U,P)</math> when the probabilities in the argument of the logarithm are comparable. Indeed, <math>efe(R,U,P)</math> is an approximation of (1). The expression on the right of (1) shows the mathematical connection to Shannon entropy; the first term is the Shannon entropy <math>H</math> of the system and, with consideration of the log function, the second term has a negative value between <math>-H</math> and 0.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The expression on the left is similar in form to the definition of Shannon entropy. The middle expression reveals the value to be similar to that of <math>efe(R,U,P)</math> when the probabilities in the argument of the logarithm are comparable. Indeed, <math>efe(R,U,P)</math> is an approximation of (1). The expression on the right of (1) shows the mathematical connection to Shannon entropy; the first term is the Shannon entropy <math>H</math> of the system and, with consideration of the log function, the second term has a negative value between <math>-H</math> and 0.</div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;"></ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">===Connection with ideas in topology===</ins></div></td></tr>
<tr><td colspan="2"> </td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><ins style="font-weight: bold; text-decoration: none;">In its simplest form involving only “primary relationships” (i.e. just <math>R</math> and <math>U</math> as shown above) EFE minimisation can also be considered as a generalisation of the [https://en.wikipedia.org/wiki/Initial_topology initial topology] (i.e. weak topology). To see this, the family of functions involved are the typical (probable) system states, the common domain of these functions is the set of system nodes (e.g. neurons, tuples of neurons or larger structures) and the common codomain is the set of node states. In the case of the initial topology a topology is already assumed on the common codomain and the initial topology is then the coarsest topology on the common domain for which the functions are continuous. In the case of EFE minimisation no structure is assumed on either the domain or codomain. Instead EFE minimisation simultaneously finds structures (for us weighted graphs, but topologies could in principle be used) on both the domain and codomain such that the functions are close (in some suitable sense) to being continuous whilst avoiding trivial solutions (such as the two element trivial topology) for which arbitrary improbable functions (system states) would also be continuous. Thus we find the primary relational structures that the system itself defines. In this context objects (visual and auditory) are present and EFE then extends to secondary relationships between such objects by involving correlation for example.</ins></div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Connection to other mathematical theories of consciousness==</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Connection to other mathematical theories of consciousness==</div></td></tr>
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=81&oldid=prevJonathan Mason at 08:54, 4 May 20202020-05-04T08:54:41Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 08:54, 4 May 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l21" >Line 21:</td>
<td colspan="2" class="diff-lineno">Line 21:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>is an element of <math>\Psi_{S}</math>.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>is an element of <math>\Psi_{S}</math>.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>For <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, the '''<del class="diffchange diffchange-inline">Eloat </del>Entropy''' of a state of the system <math>S_{i}\in\Omega_{S,V}</math>, relative to <math>U</math> and <math>R</math>, is defined as </div></td><td class='diff-marker'>+</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>For <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, the '''<ins class="diffchange diffchange-inline">Float </ins>Entropy''' of a state of the system <math>S_{i}\in\Omega_{S,V}</math>, relative to <math>U</math> and <math>R</math>, is defined as </div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>fe(R,U,S_{i}):=\log_{2}(\#\{S_{j}\in\Omega_{S,V}\colon d(R,R\{U,S_{j}\})\leq d(R,R\{U,S_{i}\})\})</math>,</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:<math>fe(R,U,S_{i}):=\log_{2}(\#\{S_{j}\in\Omega_{S,V}\colon d(R,R\{U,S_{j}\})\leq d(R,R\{U,S_{i}\})\})</math>,</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>where <math>d</math> is a metric given by a [https://en.wikipedia.org/wiki/Matrix_norm matrix norm] on the elements of <math>\Psi_{S}</math> in matrix form. In the article Quasi-Conscious Multivariate Systems<ref name="Mason2016" /> the <math>L_{1}</math> norm is used. The article also includes a more general definition of Float Entropy called Multirelational Float Entropy and the nodes of the system can be larger structures than individual neurons.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>where <math>d</math> is a metric given by a [https://en.wikipedia.org/wiki/Matrix_norm matrix norm] on the elements of <math>\Psi_{S}</math> in matrix form. In the article Quasi-Conscious Multivariate Systems<ref name="Mason2016" /> the <math>L_{1}</math> norm is used. The article also includes a more general definition of Float Entropy called Multirelational Float Entropy and the nodes of the system can be larger structures than individual neurons.</div></td></tr>
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=80&oldid=prevJonathan Mason at 21:34, 3 May 20202020-05-03T21:34:35Z<p></p>
<table class="diff diff-contentalign-left" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">← Older revision</td>
<td colspan="2" style="background-color: #fff; color: #222; text-align: center;">Revision as of 21:34, 3 May 2020</td>
</tr><tr><td colspan="2" class="diff-lineno" id="mw-diff-left-l1" >Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;">Expected Float Entropy Minimisation</del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'>−</td><td style="color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div><del style="font-weight: bold; text-decoration: none;"></del></div></td><td colspan="2"> </td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the intuitive idea that consciousness may be some kind of minimum entropy interpretation of system states. It is formulated with the aim of explaining (up to relationship isomorphism) how the brain defines the content of consciousness at least with respect to all relationships and associations within subjective experience and the structural content comprised of such relationships. For example, one might ask how the brain defines the perceived geometry of the field of view or the perceived relationships between different colours, or between different audible frequencies. At higher structural levels there are also perceived relationships between different objects and between objects and words for example.</div></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>'''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the intuitive idea that consciousness may be some kind of minimum entropy interpretation of system states. It is formulated with the aim of explaining (up to relationship isomorphism) how the brain defines the content of consciousness at least with respect to all relationships and associations within subjective experience and the structural content comprised of such relationships. 
For example, one might ask how the brain defines the perceived geometry of the field of view or the perceived relationships between different colours, or between different audible frequencies. At higher structural levels there are also perceived relationships between different objects and between objects and words for example.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f8f9fa; color: #222; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"></td></tr>
</table>Jonathan Masonhttps://wiki.amcs.science/index.php?title=Expected_Float_Entropy_Minimisation&diff=79&oldid=prevJonathan Mason: Created page with "Expected Float Entropy Minimisation '''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the intu..."2020-05-03T21:32:03Z<p>Created page with "Expected Float Entropy Minimisation '''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the intu..."</p>
<p><b>New page</b></p><div>
'''Expected Float Entropy Minimisation (EFE)''' is a mathematically formulated model of consciousness that follows naturally from the intuitive idea that consciousness may be some kind of minimum entropy interpretation of system states. It is formulated with the aim of explaining (up to relationship isomorphism) how the brain defines the content of consciousness at least with respect to all relationships and associations within subjective experience and the structural content comprised of such relationships. For example, one might ask how the brain defines the perceived geometry of the field of view or the perceived relationships between different colours, or between different audible frequencies. At higher structural levels there are also perceived relationships between different objects and between objects and words for example.<br />
<br />
Due to properties such as learning, the brain is very biased toward certain system states and therefore determines typical system states and, in theory, a probability distribution over the set of all system states. This opens up the possibility of applying [https://en.wikipedia.org/wiki/Information_theory information theory] type approaches, and EFE has some similarities with conditional Shannon entropy except that the condition involved is composed of relationship parameters. EFE is a measure of the expected amount of information required to specify the state of a system (such as an artificial or [https://en.wikipedia.org/wiki/Neural_circuit biological neural network]) beyond what is already known about the system from the relationship parameters. For certain non-uniformly random systems, particular choices of the relationship parameters are isolated from other choices in the sense that they give much lower Expected Float Entropy values and, therefore, the system defines relationships. In the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience. The principal article (Quasi-Conscious Multivariate Systems<ref name=Mason2016>Mason, J. W. (2016), Quasi-conscious multivariate systems. Complexity, 21: 125-147. doi:10.1002/cplx.21720</ref>) on this mathematical theory was published in 2015 and was followed by the article (From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation<ref name=Mason2019>Mason, J. W. (2019), From Learning to Consciousness: An Example Using Expected Float Entropy Minimisation. Entropy, 21, 60. doi:10.3390/e21010060</ref>) in 2019. EFE first appeared in a publication in 2012<ref name=Mason2012>Mason, J. W. (2013), Consciousness and the structuring property of typical data. Complexity, 18: 28-37. doi:10.1002/cplx.21431</ref>. <br />
<br />
The nomenclature “Float Entropy” comes from the notion of floating a choice of relationship parameters over a state of a system, similar to the idiom “to float an idea”. Optimisation methods are used in order to obtain the relationship parameters that minimise Expected Float Entropy. A process that performs this minimisation is itself a type of learning method.<br />
<br />
==Overview==<br />
Relationships are ubiquitous among mathematical structures. In particular, weighted relations (also called weighted graphs and [https://en.wikipedia.org/wiki/Weighted_network weighted networks]) are very general mathematical objects and, in the finite case, are often handled as adjacency matrices. They are a generalisation of graphs and include all [https://en.wikipedia.org/wiki/Function_(mathematics) functions] since functions are a rather constrained type of [https://en.wikipedia.org/wiki/Graph_(discrete_mathematics) graph]. It is also the case that [[consciousness]] is awash with relationships; for example, red has a stronger relationship to orange than to green, relationships between points in our field of view give rise to geometry, some smells are similar whilst others are very different, and there is a multitude of other relationships involving many senses, such as those between the sound of someone’s name, their visual appearance and the timbre of their voice. Expected Float Entropy includes weighted relations as parameters and, for certain non-uniformly random systems, certain choices of weighted relations are isolated from other choices in the sense that they give much lower Expected Float Entropy values. Therefore, systems such as the brain define relationships and, according to the theory, in the context of these relationships a brain state acquires meaning in the form of the relational content of the corresponding experience.<br />
Expected Float Entropy minimisation is very general in scope. For example, the theory has been successfully applied in the context of image processing<ref name="Mason2016" /> but also applies to waveform recovery from audio data<ref name="Mason2012" />.<br />
<br />
==Definitions and connection with Shannon entropy==<br />
=== Definitions===<br />
For a nonempty set <math>S</math>, a weighted relation on <math>S</math> is a function of the form<br />
:<math>R:S^{2}\to[0,1]</math>.<br />
Such a weighted relation is called reflexive if <math>R(a,a)=1</math> for all <math>a\in S</math>, and symmetric if <math>R(a,b)=R(b,a)</math> for all <math>a,b\in S</math>. The set of all reflexive, symmetric weighted relations on <math>S</math> is denoted <math>\Psi_{S}</math>.<br />
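In the finite case a weighted relation is conveniently stored in adjacency-matrix form, and the reflexivity and symmetry conditions become simple checks on that matrix. A minimal illustrative sketch (the functions and values are invented for illustration, not from the articles):

```python
import numpy as np

# A weighted relation R : S^2 -> [0, 1] on a finite set S, stored as a
# matrix whose (a, b) entry is R(a, b).
def is_reflexive(R):
    # R(a, a) = 1 for all a in S
    return bool(np.allclose(np.diag(R), 1.0))

def is_symmetric(R):
    # R(a, b) = R(b, a) for all a, b in S
    return bool(np.allclose(R, R.T))

R = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.0, 0.7],
              [0.0, 0.7, 1.0]])
print(is_reflexive(R) and is_symmetric(R))  # True, so R is in Psi_S
```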
<br />
If <math>S</math> is the set of nodes of a system, such as a neural network, then a state of the system <math>S_{i}</math> is given by the aggregate of the states of the nodes over some range <math>V:=\{v_{1},v_{2},\ldots,v_{m}\}</math> of node states. Therefore each state of the system <math>S_{i}</math> is determined by a corresponding function <math> f_{i}:S\to V</math>. The set of all possible states of the system is denoted <math>\Omega_{S,V}</math>.<br />
<br />
Given an element <math>S_{i}\in\Omega_{S,V}</math>, the above definitions give rise to a [https://en.wikipedia.org/wiki/Canonical_map canonical map] from <math>\Psi_{V}</math> to <math>\Psi_{S}</math>. That is, for <math>U\in\Psi_{V}</math>, the function <math>R\{U,S_{i}\}</math> defined by<br />
:<math>R\{U,S_{i}\}(a,b):=U(f_{i}(a),f_{i}(b))</math>, for all <math>a,b\in S</math>,<br />
is an element of <math>\Psi_{S}</math>.<br />
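Concretely, if the nodes and node states are encoded as integer indices, the canonical map is a single indexing step. A minimal sketch with illustrative values (not taken from the articles):

```python
import numpy as np

U = np.array([[1.0, 0.5],
              [0.5, 1.0]])        # U in Psi_V, with V = {0, 1}
f_i = np.array([0, 1, 1])        # a state S_i of a three-node system

# R{U, S_i}(a, b) := U(f_i(a), f_i(b)) for all a, b in S
R_induced = U[np.ix_(f_i, f_i)]
print(R_induced)                 # a reflexive, symmetric relation on S
```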
<br />
For <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, the '''Float Entropy''' of a state of the system <math>S_{i}\in\Omega_{S,V}</math>, relative to <math>U</math> and <math>R</math>, is defined as <br />
:<math>fe(R,U,S_{i}):=\log_{2}(\#\{S_{j}\in\Omega_{S,V}\colon d(R,R\{U,S_{j}\})\leq d(R,R\{U,S_{i}\})\})</math>,<br />
where <math>d</math> is a metric given by a [https://en.wikipedia.org/wiki/Matrix_norm matrix norm] on the elements of <math>\Psi_{S}</math> in matrix form. In the article Quasi-Conscious Multivariate Systems<ref name="Mason2016" /> the <math>L_{1}</math> norm is used. The article also includes a more general definition of Float Entropy called Multirelational Float Entropy and the nodes of the system can be larger structures than individual neurons.<br />
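On a toy system small enough to enumerate <math>\Omega_{S,V}</math>, the definition can be computed directly. The sketch below assumes the <math>L_{1}</math> matrix norm as the metric <math>d</math>, as in the article cited above; the particular <math>U</math>, <math>R</math> and system size are invented for illustration:

```python
import numpy as np
from itertools import product

def induced(U, f):
    return U[np.ix_(f, f)]          # the canonical map R{U, S_i}

def float_entropy(R, U, f_i, all_states):
    # fe(R, U, S_i): log2 of the number of states S_j whose induced
    # relation is at least as close to R as that of S_i (L1 distance).
    d_i = np.abs(R - induced(U, f_i)).sum()
    count = sum(np.abs(R - induced(U, f)).sum() <= d_i for f in all_states)
    return float(np.log2(count))

U = np.array([[1.0, 0.2],
              [0.2, 1.0]])                 # relation on V = {0, 1}
R = np.array([[1.0, 1.0, 0.1],
              [1.0, 1.0, 0.1],
              [0.1, 0.1, 1.0]])            # relation on a 3-node set S

states = [np.array(s) for s in product(range(2), repeat=3)]
print(float_entropy(R, U, states[1], states))   # state (0, 0, 1) -> 1.0
```

Here the state (0, 0, 1) fits this choice of <math>R</math> and <math>U</math> well, so only it and its complement (1, 1, 0) are as close, giving <math>\log_{2}2=1</math>.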
<br />
The '''Expected Float Entropy (EFE)''' of a system, relative to <math>U\in\Psi_{V}</math> and <math>R\in\Psi_{S}</math>, is defined as<br />
:<math>efe(R,U,P):=\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})fe(R,U,S_{i})</math>,<br />
where <math>P</math> is the [https://en.wikipedia.org/wiki/Probability_distribution probability distribution] <math>P:\Omega_{S,V}\to [0,1]</math> determined by the bias of the system due to the long-term effect of the system’s inherent learning paradigms in response to external stimuli.<br />
<br />
According to the theory, a system (such as the brain and its subregions) defines a particular choice of <math>U</math> and <math>R</math> (up to a certain resolution) under the requirement that the EFE is minimised. Therefore, for a given system (i.e., for a fixed <math>P</math>), solutions in <math>U</math> and <math>R</math> to the equation<br />
: <math>efe(R,U,P)=\min_{R'\in\Psi_{S},\,U'\in\Psi_{V}}efe(R',U',P)</math><br />
are the weighted relations of interest. For example, when the theory is applied to digital photographs, <math>U</math> gives the relationships between colours and <math>R</math> gives the relationships that determine the geometry of the field of view.<br />
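The articles use optimisation methods to carry out this minimisation. Purely as an illustration, on a very small system it can be done by brute force over a coarse grid of candidate reflexive, symmetric relations; the grid, the toy distribution <math>P</math> and all names below are invented for the sketch:

```python
import numpy as np
from itertools import product

# Brute-force EFE minimisation on a tiny binary system; the metric d is
# the L1 matrix norm as in the principal article.
def induced(U, f):
    return U[np.ix_(f, f)]

def fe(R, U, f_i, states):
    d_i = np.abs(R - induced(U, f_i)).sum()
    return np.log2(sum(np.abs(R - induced(U, f)).sum() <= d_i for f in states))

def efe(R, U, P, states):
    # expected Float Entropy under the system's distribution P
    return sum(p * fe(R, U, f, states) for p, f in zip(P, states) if p > 0)

def candidates(n, grid=(0.0, 0.5, 1.0)):
    # all reflexive, symmetric weighted relations on n elements whose
    # off-diagonal weights lie on a coarse grid
    pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
    for vals in product(grid, repeat=len(pairs)):
        M = np.eye(n)
        for (a, b), v in zip(pairs, vals):
            M[a, b] = M[b, a] = v
        yield M

states = [np.array(s) for s in product(range(2), repeat=3)]
P = np.zeros(8)
P[[1, 6]] = 0.5     # system strongly biased toward (0,0,1) and (1,1,0)

best = min(((efe(R, U, P, states), R, U)
            for U in candidates(2) for R in candidates(3)),
           key=lambda t: t[0])
print(best[0])      # minimal EFE value found on the grid
```

The minimising pair makes the two probable states fit <math>R</math> and <math>U</math> exactly, so each contributes <math>\log_{2}2=1</math> bit and the minimal EFE on this grid is 1.0.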
<br />
===Connection with Shannon entropy===<br />
The Shannon entropy <math>H</math> of a system is defined as<br />
:<math>H:=\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\frac{1}{P(S_{i})}\right)</math>.<br />
For <math>U\in\Psi_{V}</math>, <math>R\in\Psi_{S}</math> and <math>A_{S_{i}}:=\{S_{j}\in\Omega_{S,V}\colon d(R,R\{U,S_{j}\})\leq d(R,R\{U,S_{i}\})\}</math>, the following equalities hold<br />
:<math>\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\frac{1}{P(S_{i}\mid A_{S_{i}})}\right)</math><br />
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <math>=\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\frac{\sum_{S_{j}\in A_{S_{i}}}P(S_{j})}{P(S_{i})}\right)=H+\sum_{S_{i}\in\Omega_{S,V}}P(S_{i})\log_{2}\left(\sum_{S_{j}\in A_{S_{i}}}P(S_{j}) \right)</math>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; (1).<br />
<br />
The expression on the left is similar in form to the definition of Shannon entropy. The middle expression reveals the value to be similar to that of <math>efe(R,U,P)</math> when the probabilities in the argument of the logarithm are comparable. Indeed, <math>efe(R,U,P)</math> is an approximation of (1). The expression on the right of (1) shows the mathematical connection to Shannon entropy; the first term is the Shannon entropy <math>H</math> of the system and, with consideration of the log function, the second term has a negative value between <math>-H</math> and 0.<br />
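The equality between the middle and right-hand expressions of (1) follows from <math>\log_{2}(a/b)=\log_{2}a-\log_{2}b</math>, and can be checked numerically for an arbitrary distribution and arbitrary sets <math>A_{S_{i}}</math> (the data below are invented for the check):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random(8)
P /= P.sum()                  # a probability distribution on 8 states

# arbitrary subsets A_i of the state space; as in the definition of
# A_{S_i}, each subset contains state i itself
A = rng.random((8, 8)) < 0.5
np.fill_diagonal(A, True)

Q = A @ P                     # Q_i = sum of P(S_j) over S_j in A_i
H = -(P * np.log2(P)).sum()   # Shannon entropy of the system

lhs = (P * np.log2(Q / P)).sum()        # middle expression of (1)
rhs = H + (P * np.log2(Q)).sum()        # right-hand expression of (1)
print(np.isclose(lhs, rhs))  # True
```

Since <math>P(S_{i})\leq Q_{i}\leq 1</math>, the second term on the right is indeed between <math>-H</math> and 0, as stated above.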
<br />
==Connection to other mathematical theories of consciousness==<br />
There are some similarities between the minimisation of Expected Float Entropy and the minimisation of surprise in [https://en.wikipedia.org/wiki/Karl_J._Friston Karl J. Friston]’s [https://en.wikipedia.org/wiki/Free_energy_principle Free energy principle]. The theory is also somewhat complementary to [https://en.wikipedia.org/wiki/Giulio_Tononi Giulio Tononi]’s [https://en.wikipedia.org/wiki/Integrated_information_theory Integrated information theory] (IIT), which was initially developed to quantify consciousness but gave little priority to how systems may define relationships.<br />
<br />
== See also ==<br />
* [https://en.wikipedia.org/wiki/Information_theory Information theory]<br />
* [https://en.wikipedia.org/wiki/Integrated_information_theory Integrated information theory]<br />
* [https://en.wikipedia.org/wiki/Free_energy_principle Free energy principle]<br />
* [[Consciousness]]<br />
* [https://en.wikipedia.org/wiki/Hard_problem_of_consciousness Hard problem of consciousness]<br />
* [https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem Mind–body problem]<br />
* [https://en.wikipedia.org/wiki/Philosophy_of_mind Philosophy of mind]<br />
<br />
== References ==</div>Jonathan Mason