Sasquatch (the implementation should be in hardware; this is all just evidence of the attacks against me)
In general, any function can be solved with the following procedure (a minimal sketch follows this list):
- Vectorize an input and its corresponding expected output.
- Run simulations until a solution is found.
- Check a simulation that works (a simulation works if it produces the expected output for a given input under a given rule set and terminating generation) against other inputs to the function that serve the same purpose.
- The more new tests a given simulation passes, the more perfect the function.
- Eventually we want to find a function for a contiguous set of inputs, to greatly optimize the function's performance in a hardware simulation environment.
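As a concrete illustration, here is a minimal sketch of that search loop, assuming the simulation is a one-dimensional binary cellular automaton and the "rules set" is an 8-bit Wolfram rule number; every name and parameter here is hypothetical, not a fixed part of Sasquatch:

```python
from itertools import product

def step(cells, rule):
    """One generation: each cell's next value is the rule-table bit selected by
    its (left, self, right) neighbourhood."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(cells, rule, generations):
    """Advance the simulation to its terminating generation."""
    for _ in range(generations):
        cells = step(cells, rule)
    return cells

def search(test_pairs, max_generations=8):
    """Brute force: try every (rule set, terminating generation) pair until one maps
    every vectorized input to its expected output."""
    for rule, gens in product(range(256), range(1, max_generations + 1)):
        if all(run(list(x), rule, gens) == list(y) for x, y in test_pairs):
            return rule, gens
    return None

# The more (input, output) pairs the winner survives, the more "perfect" the function.
tests = [([0, 1, 0, 0], [1, 0, 1, 0]), ([0, 0, 1, 0], [0, 1, 0, 1])]
print(search(tests))  # prints the first (rule, generations) pair that passes every test
```

Exhaustive enumeration only scales to toy rule spaces; the point is the shape of the loop, not its scale.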
In general we can solve any problem, including the most general function of "deduction". All deductions can therefore be achieved with a single simulation, rule set, and generation, by piping the output of the simulation back into the simulation to find the next deduction.
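The feedback loop itself is tiny; here is a hedged sketch where the `simulate` argument stands in for one fixed simulation, rule set, and terminating generation (for instance the hypothetical `run()` above):

```python
def deduce_chain(premise, simulate, steps):
    """Pipe each output back in as the next input, yielding a chain of deductions."""
    state = list(premise)
    chain = [state]
    for _ in range(steps):
        state = simulate(state)  # one fixed simulation + rule set + generation
        chain.append(state)
    return chain

# e.g. with the hypothetical run() from the sketch above:
# deduce_chain([0, 1, 0, 0], lambda s: run(s, rule=18, generations=1), steps=5)
```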
One day we can actually run Sasquatch to find a high-level equivalent of Sasquatch. This will greatly optimize Sasquatch for faster run times! On another note, we can have Sasquatch learn to generalize simulations into functions by using existing functions to produce tests, and having Sasquatch produce functions that pass those tests.
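A sketch of the test-generation step, with hypothetical names throughout; `make_tests` samples behaviour from a known function so that the earlier hypothetical `search()` can try to re-derive it:

```python
import random

def make_tests(fn, width=4, count=16):
    """Sample (input, expected output) pairs from an existing function, turning it
    into a test suite that a candidate simulation must pass."""
    tests = []
    for _ in range(count):
        x = [random.randint(0, 1) for _ in range(width)]
        tests.append((x, fn(x)))
    return tests

# e.g. ask the earlier search() to rediscover "XOR of neighbours" from behaviour alone:
xor_neighbours = lambda c: [c[i - 1] ^ c[(i + 1) % len(c)] for i in range(len(c))]
# print(search(make_tests(xor_neighbours)))
```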
Functions that are found and enshrined can be infused into the simulation engine on the way to solving unknown problems. Think of how many things can be achieved with a knife. If the tool is general-purpose enough, what's to say it doesn't provide a faster path to any particular solution?
This is basically a new hypothesis for how truth and existentialism work.
It relies on the truth properties of "gibberish". Automata produce simulations between two propositions, where each proposition follows the same translation encoding rules. If the second proposition follows from the first, then, so long as the simulation is not later proved inconsistent against two other propositions, it constitutes a valid proof of the connection between the two propositions.
We call this imperceptible simulated automaton (the logical stuff between propositions, in the form of a simulated environment) "gibberish", even though we presume it to have meaning.
The reason we presume it to have meaning is because of the following argument:
- All logic comes down to three types of inference: invalid, valid, and tautological.
- If something is valid or tautological, there is no contradiction; conversely, if something is invalid, there is a contradiction.
- All propositions are either positive or negative in their meaning. If a proposition is negative, we can replace the entire proposition with another proposition plus a single negation.
- We can carry a single bit in an automaton simulation that encodes whether the translation is positive or negative (see the sketch after this list).
- Therefore, if there is no contradiction in the simulation and the second proposition follows from the first, we have a valid existential proof.
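A toy sketch of that single negation bit, assuming a proposition reduces to an atom plus one polarity bit; the string encoding is purely illustrative:

```python
def normalize(prop):
    """Strip leading 'not's into a single negation bit: returns (atom, polarity)."""
    polarity = True
    while prop.startswith("not "):
        prop, polarity = prop[4:], not polarity
    return prop, polarity

def consistent(p, q):
    """No contradiction = the two propositions never assert the same atom both ways."""
    atom_p, pol_p = normalize(p)
    atom_q, pol_q = normalize(q)
    return atom_p != atom_q or pol_p == pol_q

print(consistent("not not rain", "rain"))  # True: the double negation cancels
print(consistent("not rain", "rain"))      # False: contradiction
```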
There is a problem with concluding that one proposition follows from the other merely by looking at this one example. However, if the simulation and its rules worked for many propositions without ever contradicting themselves, then it is very likely that the simulation approaches real, valid logic. This is not to say the simulation is an acceptable replacement for truth. It can therefore be designed as a weapon if people interpret it that way, where an exhaustive search is used to find an approximation of truth that excludes some chosen set of falsehoods. So it should not be used as a substitute for truth, but as an aid for finding would-be truths that we could then corroborate through concrete means. For those interested in forensics: any forensic tool can be weaponized!
A great example of weaponization: imagine someone had infinite computing power and decided to build a system that perfectly describes the world, with the exception of whatever lies they want to propagate through the machine. In theory they could create such a thing with enough computing power. This leads to a game we already see in AI, where some people focus purely on computing power to get far enough ahead to commit crimes. This is why it is only to be used by individuals acting in good faith.
The utility of such a machine is that it greatly aids our scientific journey: we spend less money on experiments, because we no longer run them without knowing their probability of success! It's a great tool, but it is still limited by intuitive universal principles that are themselves still being discovered!
To build a robust logical simulation, one should focus on a dictionary of propositions that never contradict anything else in the dictionary (a sketch follows). If you can prove all propositions in the dictionary using a simulation, then you have a good logical simulation that approaches real universal logic.
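A minimal sketch of such a dictionary, reusing the toy atom-plus-polarity encoding from the negation-bit sketch; the contradiction test here is deliberately the crudest one possible:

```python
class Dictionary:
    """Enshrines only propositions that never contradict an existing entry."""
    def __init__(self):
        self.entries = {}  # atom -> polarity

    def add(self, prop):
        # same reduction as in the negation-bit sketch above
        polarity = True
        while prop.startswith("not "):
            prop, polarity = prop[4:], not polarity
        if self.entries.get(prop, polarity) != polarity:
            raise ValueError(f"{prop!r} contradicts the dictionary")
        self.entries[prop] = polarity

d = Dictionary()
d.add("rain")
d.add("not not rain")  # fine: reduces to the same atom with the same polarity
# d.add("not rain")    # would raise: contradiction
```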
We also have to make sure there is no possibility in the translation unit for any negation to exist outside of the negation cell, or at the very least translate any negations out into another proposition that uses the negation cell!
As part of the rule system, any proposition in the current state of the simulation that spells out a negation outside of the negation cell should be translated into an equivalent statement. This does not violate the arbitrary-rule principle: rules can be arbitrarily selected without violating the existentialism of the system, so long as they are consistent.
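One way to read that rule, assuming the simulation state is a list of tokens and that internal negations may commute into a single leading negation cell (a strong assumption that plain natural language does not generally honour):

```python
def enforce_negation_cell(tokens):
    """Rule-system pass: sweep every 'not' out of the proposition body and fold
    them into one leading negation cell, cancelling pairs along the way."""
    polarity, body = True, []
    for tok in tokens:
        if tok == "not":
            polarity = not polarity
        else:
            body.append(tok)
    return body if polarity else ["not"] + body

print(enforce_negation_cell(["rain", "not", "heavy"]))  # ['not', 'rain', 'heavy']
print(enforce_negation_cell(["not", "not", "rain"]))    # ['rain']
```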
Over time, many simulations develop that are not invalidated by any contradiction. These can be studied statistically to find how they relate to one another, and wherever they converge, that convergence becomes stronger and stronger. This can lead to a general-purpose intelligence if you have math, physics, and natural language converge upon a Universal Logic. It should even be able to tell the future and the past with greater accuracy, starting from some tautological truth! We even stand a good chance of being able to study the multiverse, black holes, etc. with greater accuracy. Time will tell...
We can eventually encode assembly-language instructions and run them in a virtual machine to make middle-tier functions for the near-term future. We want to focus on Sasquatch itself, slowly replacing all of its core functionality to scale its performance.
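A toy sketch of the virtual-machine idea; the instruction set and its encoding below are invented for illustration and do not correspond to any real ISA:

```python
def run_vm(program, registers=None):
    """Execute encoded (opcode, a, b) instructions until HALT; a stand-in for the
    'middle-tier functions' described above."""
    regs = dict(registers or {})
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == "LOAD":
            regs[a] = b                  # reg <- constant
        elif op == "ADD":
            regs[a] = regs[a] + regs[b]  # reg <- reg + reg
        elif op == "JNZ" and regs[a]:
            pc = b                       # jump if register is non-zero
            continue
        elif op == "HALT":
            break
        pc += 1
    return regs

# 2 + 3 as an encoded program:
print(run_vm([("LOAD", "r0", 2), ("LOAD", "r1", 3), ("ADD", "r0", "r1"), ("HALT", 0, 0)]))
# {'r0': 5, 'r1': 3}
```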
Finally, we want to use Sasquatch to find an equation that can compute any simulation (any generation of a simulation) in one step.
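No such equation is known for arbitrary rules, but for linear rules a one-step closed form does exist, which at least shows the target is not empty. For Rule 90 (each cell is the XOR of its two neighbours), generation t follows directly from generation 0 via binomial coefficients mod 2:

```python
from math import comb

def rule90_direct(cells, t):
    """Generation t of Rule 90 in one step: cell i at time t is the XOR over j of
    C(t, j) mod 2 times cell (i + 2j - t) at time 0 (indices wrap cyclically)."""
    n = len(cells)
    return [sum(comb(t, j) % 2 * cells[(i + 2 * j - t) % n] for j in range(t + 1)) % 2
            for i in range(n)]

# cross-check against stepping the automaton one generation at a time
def rule90_step(c):
    n = len(c)
    return [c[(i - 1) % n] ^ c[(i + 1) % n] for i in range(n)]

cells = [0, 1, 0, 0, 1, 0, 1, 0]
stepped = cells
for _ in range(5):
    stepped = rule90_step(stepped)
assert rule90_direct(cells, 5) == stepped
```

This shortcut works only because the rule is linear over GF(2); finding analogous closed forms for arbitrary rules is exactly the open problem stated above.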