This was supposed to be a status update, but then it got way too long.

So you know the a vs. an rule? I don’t remember when exactly I learned the rule. I don’t even remember the rule, and I chose not to google it for this demonstration. But the way I’ve always followed the rule is by whether it “sounds right.” That’s not a very specific way to follow a rule, is it?

But let’s see if it works.

Sounds right

An apple…
An orange…
A grapefruit…
A car…
A person…
A lion…
A bee…
An airplane…
An umbrella…
An interesting fact…
An Ovienmhada…
A Smith…
A Johnson…

Sounds wrong

A apple…
A orange…
An grapefruit…
An car…
An person…
An lion…
An bee…
A airplane…
A umbrella…
A interesting fact…
A Ovienmhada…
An Smith…
An Johnson…

So how’d I do?

The rule

I think the rule is if a word starts with a vowel, you use “an” before it. If it starts with a consonant, you use “a” before it. I can think of exceptions, though, like unicorn and x-ray.

“An unicorn” follows the rule I outlined above, but somehow it sounds (or feels) wrong. “A unicorn” sounds correct.

“A x-ray” also follows the rule, but “an x-ray” sounds right.

So basically, I’ve deduced a rule that may or may not be real (again, I didn’t look up the answer on purpose).
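Written out as code, the rule I think I’ve deduced looks something like this. This is a toy sketch of my guess above, not the real grammar rule, and the exception list is just the two words I happened to think of:

```python
# A toy sketch of the rule as I deduced it. The exception list is just the
# two words I could think of, so it's almost certainly incomplete.
EXCEPTIONS = {"unicorn": "a", "x-ray": "an"}

def article_for(word):
    word = word.lower()
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    return "an" if word[0] in "aeiou" else "a"

print(article_for("apple"))       # an
print(article_for("grapefruit"))  # a
print(article_for("unicorn"))     # a  (exception)
print(article_for("x-ray"))       # an (exception)
```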

So how does my brain have a high degree of confidence that my “gut instinct” is right? When I write, I don’t think about a vs. an. I don’t work out or consciously process any rules, but somehow my brain follows these rules.

It’s not like adding two numbers together, which I have to do consciously. I apply the a vs. an rule in a fraction of a second without thinking.

How?

You know what this is? It’s how “neural networking” works. Machine learning via neural networks is where you give a robot a task, and at first it does the task completely at random. When it happens to do it correctly, you strengthen whatever network produced the correct result, and you repeat ad infinitum. The likelihood of the robot getting the “right answer” increases, because the network that produces the desired behavior gets stronger every time the robot gets it right. Eventually, the robot only gets it right.

For example: shooting a basketball through a hoop. Eventually the robot will figure out a way to get the ball in the hoop and will never miss, as long as it doesn’t run out of energy or break, and the variables don’t change (wind speed, etc.).
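Here’s a minimal sketch of that strengthen-and-repeat loop in Python. The scenario and numbers are made up for illustration; real machine learning is more involved, but the shape of the idea is the same:

```python
import random

# Two competing "pathways" for picking the article before "apple".
# They start out equally strong, so at first the choice is basically random.
strengths = {"a": 1.0, "an": 1.0}
correct = "an"  # the desired behavior for this word

for attempt in range(1000):
    # Pick an article with probability proportional to pathway strength.
    pick = random.choices(list(strengths), weights=list(strengths.values()))[0]
    if pick == correct:
        strengths[pick] += 0.1  # strengthen whatever pathway got it right

print(strengths)  # "an" ends up far stronger, so it almost always wins
```

After enough repetitions the “an” pathway dominates, which is the “eventually, the robot only gets it right” part.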

That’s what my brain is doing with the a vs. an rule. It doesn’t know the rule, even though it might have learned it at some point. I couldn’t write down the rule for you, and what I have above is my best guess. I could be wrong, or missing some corollary.

That is how I can type an article at 80 wpm without giving a thought to the a vs. an rule, and still get it right most of the time.

That’s how you can be driving and listening to a radio program, and then “wake up” 10 minutes later at your destination and say, “Wait…how did I get here? I don’t remember the last 10 minutes of scenery. Who was driving the car?”

Your subconscious mind can drive, and walk, and avoid obstacles, and even obey verbal commands (sleepwalkers do this), and answer questions, all without you consciously instructing it to. You have a Siri and a Google self-driving car living inside your head.

What if that “feeling” that you’ve got it right is just your brain following its most strengthened neural pathway, and the “feeling” that you’ve got it wrong is your brain resisting the urge to follow a weaker one?

That might be a way to mathematically quantify an emotion.
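If you wanted to put a toy number on it, one purely speculative way is to treat the “feeling of rightness” as the share of total strength held by the pathway your brain actually followed, reusing the made-up pathway strengths from the sketch above:

```python
def feeling_of_rightness(strengths, choice):
    # Toy metric: how dominant the chosen pathway is relative to its rivals.
    # Near 1.0 it "sounds right"; near 0.5 (with two options) it sounds ambiguous.
    # This is only an illustration of the idea, not a real model of emotion.
    return strengths[choice] / sum(strengths.values())

print(feeling_of_rightness({"a": 1.0, "an": 95.0}, "an"))  # ~0.99, sounds right
print(feeling_of_rightness({"a": 1.0, "an": 1.2}, "an"))   # ~0.55, hard to say
```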