The process of building an artificial intelligence will be tedious, exhausting, and extremely rewarding by the end. When I try to survey the sciences involved in building an artificial intelligence, I no longer see only the general broad sweeps; it's all very clear to me. Even the mathematics involved is very clear.

This is all because I've built a huge foundation of knowledge to search through. I can mentally search through files of what I think is the relevant science for any given question.

But this document is meant to outline my pursuit of the creation of an artificial intelligence. There are three phases of pursuit: the first is the acquisition of information, the second is the use of that information to build something, and the third is fine-tuning it and reporting on its success.

In the first phase, I'll be building models of this intelligence to create a whole system that can be translated into a machine (mathematical/logical models). Off the bat you can see graph theory, set theory, linear algebra/matrices, systems theory, coding theory, computational semantics, etc., just in the translation. To be honest, I believe we can get past computational semantics; this building up of an artificial intelligence in my mind will be very low level, avoiding the annoyance of higher-level programming. I might try to figure out C at most, but again, in this phase I'm more concerned with translating biological machinery into mathematical notation, semantics included. There are some things that need to be similar between the two; off the top of my head, concurrency is the largest fucking thing to focus on, and creating a model that utilizes concurrent processing is hugely important, but I digress.
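As a toy illustration of the kind of concurrent model I mean (all the names here are my own placeholders, not anything from an existing framework): a few worker processes, each running independently, passing messages through queues like brain regions passing signals.

```python
import queue
import threading

def worker(name, inbox, outbox):
    # Each "region" runs concurrently, reading stimuli from its inbox,
    # transforming them, and forwarding the result to the next stage.
    while True:
        item = inbox.get()
        if item is None:       # sentinel: shut down and pass it along
            outbox.put(None)
            return
        outbox.put(f"{name}({item})")

# A two-stage pipeline: sensory -> association -> results
sensory_q, assoc_q, out_q = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=worker, args=("sense", sensory_q, assoc_q)),
    threading.Thread(target=worker, args=("assoc", assoc_q, out_q)),
]
for t in threads:
    t.start()

for stimulus in ["light", "sound"]:
    sensory_q.put(stimulus)
sensory_q.put(None)

results = []
while (r := out_q.get()) is not None:
    results.append(r)
for t in threads:
    t.join()
print(results)  # ['assoc(sense(light))', 'assoc(sense(sound))']
```

This is the message-passing shape of the thing, not a claim about the brain; whether each stage should hold its own memory or share a central one is exactly the open question above.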

The first phase is, in and of itself, a pain in the ass. Because it's the first step, it will receive the most focus in this planning.

The second phase is to actually build the thing. There is overlap between translation and design: while I'll collect all the research required, I'll still need to turn it into math in the first step. I'll likely be using a cluster computer in order to satisfy the concurrent processing. This will likely change depending on how I figure out the brain works. Specifically, does each process in the brain contain its own memory, or does it utilize a central store? What if both? If it turns out there's a central process, it may be more useful to just sell my kidneys and buy a single computer with something like 128 cores and 256 threads, at the cost of modularity. Who knows; this is why I'm not planning this out that much until I learn neuroscience, and then the hardware-software interface.

It's a bit illusory to say "design" and "build" separately, because this phase requires just as much research into computing as the first did. When I'm designing, I'll likely be learning to program C/C++ and assembly at the same time (or learning how to build my own language if need be), after I choose the architecture and after I play around with setting up a cluster computer or building a test operating system with the AI in mind. I'll try to penetrate this field first from the bottom up, and second through machine learning. Once I learn the math in the first phase, fuck it man, I'm learning machine learning.

The only sure thing about the second phase is the diagnostic tools and smart programming required to set it up.

The third phase is the report/analysis of the machine I end up building. This will again be determined in the first phase, when I research deeper into the philosophy of mind and create a form of feedback/diagnostic measure with regard to consciousness. The clearest way to determine it, in my eyes, would be interactive tests with the artificial intelligence: not necessarily a Turing test, but Turing tests, logic tests, emotion tests, sympathy tests, etc., utilizing psychology (motivation psychology tests, for example) to determine whether its behaviour is identical to human behaviour, or, where there are flaws, whether they are identical to certain human flaws.

It's likely that I'll design the AI clear of heuristics and bias. I'll definitely add a reward system, as that is essentially the reason people have any form of meaning. There is an interesting philosophical question of whether an intelligence can be intelligent without self-interest, which I'll have to address in the first-phase design. What form of drive mechanism should the AI have? One tied to diagnostics would be fun to think about, from literal physical measures such as voltage and power levels to complex processing of specific forms of sensation such as visual and auditory stimulation. It's definitely an interesting question for its own time.

There's also a point to make that the first iteration of this AI will have animal-like intelligence. By necessity it'll require social-like intelligence: when it sees a human figure, it must interact according to intelligent interests, such as getting a human to charge it. It must learn, and observe what a person is doing. It would make a nice pet, which could be a selling point and bring in money to build a true human-level intelligence one day.

PHASE 1: RESEARCH AND TRANSLATION PAIN

In figuring out the systems required, we need some universal applications and knowledge first, which should be looked into.

The contemporary philosophies:

• Just by studying the philosophy of science and of logic, I've found the universal key that unlocks an understanding of what math means, and I know what math means because I've read the philosophy of metaphysics. It's just extremely divine material to go through.

Mathematics:

• This is fundamental and necessary to the creation of models and systems. If I do not understand graph theory, then there is no way in hell I can make a model of semantics or neuroscience that can be beautifully translated into a machine.

◦ Graph theory: fundamentally important for my form of semantic analysis.

◦ Linear algebra: important for neural networks.

◦ Network theory: builds on graph theory; important shit.

◦ Systems theory: requires the math above.

◦ Theory of computation: basically computer math.
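A minimal sketch of what I mean by graph theory serving semantic analysis (the concepts and relations below are placeholders of my own choosing, not a real lexicon): concepts as nodes, relations as edges, and relatedness measured as path distance.

```python
from collections import deque

# A tiny semantic network: each concept maps to its directly related concepts.
semantic_net = {
    "apple":  {"fruit", "red"},
    "banana": {"fruit", "yellow"},
    "fruit":  {"food"},
    "red":    set(),
    "yellow": set(),
    "food":   set(),
}

def semantic_distance(net, a, b):
    """Shortest number of hops between two concepts (edges treated as undirected)."""
    # Build an undirected view of the graph first.
    undirected = {k: set(v) for k, v in net.items()}
    for node, nbrs in net.items():
        for n in nbrs:
            undirected.setdefault(n, set()).add(node)
    # Breadth-first search from a toward b.
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for n in undirected[node]:
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return None  # unrelated concepts

print(semantic_distance(semantic_net, "apple", "banana"))  # 2 (via "fruit")
```

The point is only that once meanings sit in a graph, "how related are these two words" becomes an ordinary graph-theoretic question.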

Applicative knowledge:

Neuroscience and psychology:

• These two are opposites that must be met in the middle, in the form of biological psychology and its inverse, behavioural neuroscience. Social biases are hardly going to be derivable through mathematical analysis; a bias may be logical as an evolved characteristic, a heuristic, or a way to survive social competition, but in and of itself I won't find it through a logical analysis of individual self-interested actors.

• We'll leave intelligent processing for later in the first phase; for now I want to focus primarily on the unintelligent systems seen in animals, specifically input/output mechanisms. This is the first step of translation.

• Processing

Linguistics, semantics, sociology:

• This is the direct interaction between semantics and graph theory. I need a holistic system, which means outgrowing the individual and looking at society through that lens. Through my own attempt to build a math of imperative analysis, I've found how important descriptives are to our understanding of the world. It may well serve as a method of analyzing words too.

• I need to learn semantics as a sort of paradox: I need to learn logic to learn semantics, but I need to learn semantics to gain a true understanding of logic. It's very funny.

• Language, and solving the problems behind semantics, is the key to the divine; I wholly believe this. The interaction between language and the other processes is the key to it.

• Psycho-linguistics is where god is.

Basic understanding of computers:

• While I'm going balls deep into the other fields, for computers a basic understanding is all the information I need.

• For example, I say "cluster computing," but what I really mean is distributed computing, or perhaps a hybrid of the two.

• A textbook on hardware-software interface, and a textbook on operating systems should be enough for me to get an idea of what I want to write up models for.

I'll add to this list the further down the abyss I go. So this is where I'm basically researching, finding information, and utilizing it for the next part of the first phase. This list isn't necessarily linear; I won't start with the philosophies, move to math, and then to psychology. I believe in referential knowledge: so long as I have the connections, I have the ability to return and more rigorously apply a new perspective. The knowledge I've learned is decentralized, so I'm not worried about prerequisites so much anymore, except in absolute cases.

I already have a system of reading, but a new one I'm adding just for this is reading in short sections and papers so that I'm learning a new concept every day: one a day, doing any questions it asks of me on the backs of the pages. Psychology I'm already learning in school, and neuroscience I'm about to learn with this one guy. Mathematics I'm working on on my own as well. The only fields left unread are the philosophies and logic.

The philosophies will have to be read later, after I figure out my study program. I'll set them for next year, when philosophy becomes a larger part of my reading. But I digress; I'll figure it out.

***

Problem Solving: If someone could have created an AI by now, they would have. The issue is that there are open problems that need to be solved first. I'll write the problems down here and work out answers to them. Things like the binding problem need to be researched before I move on, but some I can answer right now.

***

"Intelligence Theory"

I've developed an interesting method of analysis between intelligent systems that can be used to create the framework for a completely intelligent creature in the context of its environment and social group, but it's missing a lot, namely a semantic structure: an intelligent system's collection of axioms, schemas, and descriptive sets, and the processes that they undertake.

It's my hope that this may be the way in which I can create a holistic system that integrates the individual with the social.

***

Issue of input-function-output:

It's more complicated than that; I knew it from the point of writing it. So how do I approach this? My new idea, which I want to write down very quickly, is to approach it from two directions: psychology/high-level, and low-level. Again, I'll be figuring out the in-betweens that don't overlap.

I don't necessarily have to recreate the brain; I have to recreate the brain as it is according to theory.

***

A motivation system is necessary for actions to be pursued:

A motivation system is necessary for certain actions to be pursued, as the development of certain senses and the use of certain mechanisms would otherwise seem unnecessary given how the organism utilizes them. As certain parts of the brain are heavily dynamic, it raises the question of why they develop specifically for a certain act, and it all relates back to motivation.

What's needed is a breakdown of how the body goes from "your blood sugar levels are low" to "I want to eat food" and then pursues that behaviour. I say pleasure is the reason why, but that still doesn't answer how pleasure turns into pursuit. What is want, and why is it intangible? And how do we decide to go after what we want? We feel pleasure and motivation, and they allow us to choose what we want, but we still only feel them.
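One crude way to model that chain, a sketch under my own assumptions (the set point, the linear drive, and every name here are invented for illustration): a physiological variable drops below a set point, which raises a drive signal, which in turn selects a behaviour.

```python
def hunger_drive(blood_sugar, set_point=90.0):
    """Drive rises linearly as blood sugar falls below the set point; zero otherwise."""
    return max(0.0, set_point - blood_sugar) / set_point

def choose_behaviour(drives):
    """Pursue the behaviour whose drive signal is strongest; rest if none is active."""
    behaviour, strength = max(drives.items(), key=lambda kv: kv[1])
    return behaviour if strength > 0 else "rest"

drives = {
    "seek_food":  hunger_drive(blood_sugar=60.0),  # low sugar -> strong drive
    "seek_water": 0.1,
}
print(choose_behaviour(drives))  # seek_food
```

This says nothing about what the feeling of want *is*; it only shows how little machinery is needed to get from "levels are low" to "pursue food," which is exactly why the intangible part is the interesting part.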

Is this the idea of focus, or attention?

Motivation/decision making is thankfully, by nature, an easy process to model. There have to be stages to decision making. Between two types of food, focus means an animal can only fix its senses or thought on one. Thoughts are by nature virtual memories/senses. When an animal sees an apple, it imagines eating an apple; when it sees a banana, it imagines eating a banana. These virtual senses become the forefront of attention.

Because only one sense can be at the forefront at a time, it can go something like this: a person sees an apple and a banana, but the apple has a bias for whatever reason, because they want something sweet and sour, or because their memory holds a poor preference for the banana, such as the trouble of the peel. They remember the event, the characteristics, the memory.

Each option is taken into the decision-making apparatus one at a time, and is tested against a multitude of factors: the possibility of committing the act, the consequences of committing the act (which follow as their own form of memory), and the sensation of committing the act. Once these three conditions are met, pursuit seems to happen, and the act of pursuit has its own dynamics where, at certain conditionals, the brain must re-evaluate its motivation, namely the possibility and the consequence.
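Those three tests could be sketched like this (a toy model; the scoring rule and the option attributes are my own assumptions, not anything established):

```python
def evaluate(option):
    """Gate an option on possibility and consequence, then score the survivors."""
    if not option["possible"]:
        return None                    # can't commit the act at all
    if option["consequence"] < 0:      # a remembered bad outcome vetoes pursuit
        return None
    return option["sensation"] + option["consequence"]

def decide(options):
    """Consider options one at a time, as attention would, and pursue the best."""
    best, best_score = None, float("-inf")
    for name, attrs in options.items():
        score = evaluate(attrs)
        if score is not None and score > best_score:
            best, best_score = name, score
    return best

options = {
    "apple":  {"possible": True,  "consequence": 0.5,  "sensation": 0.8},
    "banana": {"possible": True,  "consequence": -0.2, "sensation": 0.9},  # peel trouble
    "cake":   {"possible": False, "consequence": 0.9,  "sensation": 1.0},  # out of reach
}
print(decide(options))  # apple
```

Re-evaluation during pursuit would just be calling `evaluate` again on the chosen option as its possibility and consequence values change.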

This all returns to my old theory of motivation. But that theory isn't that good on its own; it must be integrated with the established theories of motivation, and my own thoughts behind them.

I also can't forget parallel processes, for example the act of

***

Context is undoubtedly important as fuck in determining inductive reasoning:

What I mean by context is the brain's ability to place itself in the context of the world. Our perspective mirrors what we see. We hold a map of the world, and our decision making has to access its orientation within it. In novel places, we are actively working to maintain that map of the world, so the senses need to continually reconfigure it.
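A toy version of that continual map-building (the grid positions and feature names are my own placeholders): the agent folds each new observation into a persistent map, with newer readings overwriting older ones.

```python
def update_map(world_map, observations):
    """Fold new sensory observations into the agent's persistent map.

    Each observation is a (position, feature) pair; newer readings
    overwrite older ones, so the map tracks the world as last seen.
    """
    for position, feature in observations:
        world_map[position] = feature
    return world_map

world_map = {}
# First look around: a wall ahead, food to the left.
update_map(world_map, [((0, 1), "wall"), ((-1, 0), "food")])
# Keep observing in a novel place: the food is gone, a door appears.
update_map(world_map, [((-1, 0), "empty"), ((1, 0), "door")])
print(world_map[(-1, 0)])  # empty
```

Decision making would then query this map for orientation instead of re-sensing the world from scratch, which is the point of holding a map at all.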

***

It seems I need to actually learn computer science: As I look into this more and more, I'm realizing that it's very necessary to begin studying computer science. As I read, I'm building the artificial intelligence in my head using computers and looking up cluster computing. I'll research what I want when I need to, as an extra behind my more systematized reading in school (psychology) and in mathematics.