It's Friday morning. I bake the "Family Favorite Chocolate Cake," because it's *delicious* and I haven't baked it in years.

I then eat half of it over the next three days. Because it's *delicious*.

That third day? Yeah, I didn't really eat anything else, not even my habitual bacon-and-eggs breakfast. "Cake makes a better breakfast," I thought. After my cake dinner and almost an hour of relaxing, I exercised a bit and suddenly crashed hard: exhausted, my limbs difficult to move, my thinking clouded. My girlfriend Jeannie was with me at the time, and to her, I simply appeared to "zone out" for a bit. She was unaware of how severe my exhaustion felt.

Jeannie and I were planning to head to Authentic Relating Games in separate cars. I told her to go on, that I was fine. Fifteen minutes later, right before I walked out the door, I felt unable to stay standing, so I went to bed and napped.

Some time later, my phone ringing in my pocket wakes me up. I fumble with my pants for a while, but can't get the phone out, so I give up and go back to sleep.

About an hour and a half after I started sleeping, she's back next to me, waking me up, trying to figure out why I never made it to games, why I'm just lying there, looking at her but not speaking. I was trying to speak, but the muscles just weren't doing it. I was trying to reach out to her, but couldn't lift my arm. After 15-20 minutes, I managed to start moving enough to grab my Chromebook and type messages into it (accidentally texting a different friend instead of her), and so I told her how I was feeling. Eventually, she asks, "Do you need real food? All you've had today is cake."

That's when it all started to make sense.

So I asked Jeannie to call my mom, who walked us through dealing with a diabetic low, which was what I thought was happening. (My grandfather typically had highs that crashed into lows, so that was my reasoning. In retrospect, these symptoms could have been highs or lows, hyperglycemic or hypoglycemic.)

We started with a small lick of honey for quick carbs, then swapped to apple slices for carbs with fiber, peanut butter for protein, pecans for everything: protein with fiber and fat. I'd take a small bite of whichever sounded like it would taste the best, then drink water because I was really thirsty, then wait to see what my body needed next. I also made sure to start getting up and moving around a bit once it felt good to, to wake up my muscles and metabolism without exhausting myself.

In total before I started really feeling close to normal, I had probably (least to greatest) 1/4 tsp honey, 1 1/2 slices of apple, 1/3 cup peanut butter, 2/3 cup pecans, and 3-5 cups of water. Time-wise, that was over about another hour and a half.

The next day, I drive from Houston to Corpus Christi, to visit my parents as already planned. They have a spare glucometer waiting for me when I get there, and we decide to start monitoring my sugar, and to do a few tests of my tolerance to it.

*Mostly technical from here on out. This is basically a lab write-up.*

*The standard A1C test, and why I didn't get one done*

If you go to the doctor asking to be tested for diabetes, they'll typically have you take a blood test (one you can take without fasting) called the A1C test. It returns a value corresponding to your average blood sugar over the past 3 months. There were a few reasons we decided not to do that test for me, the most important of which were:

- I only started changing my diet for the worse recently, for at most the past month, and mostly this one weekend. If I'm more diabetic now because of diet, then a 3-month average wouldn't be very useful.
- If I'm having highs crash into lows like my grandfather, then the average wouldn't be as abnormal as a diabetic's typically would.

After I arrived in Corpus, I spent the rest of the day eating my dad's diabetic diet and monitoring my sugar. That normally means testing right when you wake up, right before eating, and 2 hrs after eating. (The wake-up reading checks how you were during the night, and you wait 2 hrs after eating because everyone's blood sugar goes higher within 1 hr, but a diabetic's sugar level stays high or goes down very slowly.)

My readings seemed pretty normal (morning and before eating in the 70-100 mg/dL range, after eating only 20 mg/dL higher), and I was feeling much better, so the next morning we did a first (very informal) glucose tolerance test.

Very informal in that we just fed me as many pancakes with syrup as I could eat, without really counting carbs. Trying to re-create the cake incident, you see.

I felt the same kind of really bad as I had after the cake within an hour, and my reading had gone from 87 to 154, which is high but still within the normal range. (I took it early since I was feeling so bad.) Taking a reading every hour, the next four readings were all within 5 mg/dL of 120 mg/dL, which is within the typical measurement error (given by the device manual) of a single value, so I interpreted that as "not going down". (I also still felt really bad.)
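That "not going down" judgment amounts to a simple check: do all the readings fit within one error band of a single value? Here's a minimal sketch of that check (the function name and structure are mine; the ±5 mg/dL figure is the one quoted from the device manual):

```python
def looks_flat(readings, error_mg_dl=5):
    """True if every reading could plausibly be the same true value,
    i.e. the whole spread fits within +/- error_mg_dl of one number."""
    return max(readings) - min(readings) <= 2 * error_mg_dl

# four hourly readings all within 5 mg/dL of 120 read as "not going down"
print(looks_flat([118, 122, 120, 125]))  # → True
```

A genuinely falling sugar level would fail this check, since successive readings would drift outside the band.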

Finally, I rode on a stationary bike for 10 mins, drinking a lot of water. I soon felt better and had a reading of 96 mg/dL, and it stabilized after that. (The next meal didn't cause a huge spike.)

First, the readings aren't that unusually high, so I'm probably not diabetic.

However, I felt as bad as I had before *exactly* when my blood sugar was high, and it was unusual that my body wasn't able to metabolize the last bit of sugar without me deliberately exercising. Particularly concerning is that had I not taken a reading to know I was high, I would have just gone to sleep from feeling exhausted, same as before, and just stayed high instead of getting better.

Also, I felt the same as before when I'm pretty sure I had a low (given that every bit of food brought me closer to feeling normal). So it seems like either the cake case was a low that feels exactly like a high and I need a glucometer to tell highs and lows apart, or it was a high that I dealt with well enough by being a little active and eating so slowly that it wasn't a problem.

So it definitely seems like I'll need to have a glucometer on hand and keep my diet low-carb regardless. And I don't want to do any more tests of my glucose tolerance if I can avoid it. Feels baaaaad.

*My dad (Darwin) looked up specifications for a standard glucose tolerance test and we decided to both do it to be able to compare our results formally.*

The first main standard here that wasn't in the first test is regulating what you ingest. This time, I drank a Fanta, which has exactly the 75 g of sugar that would be used in a lab. (Dad drank a 7up that had 64 g of sugar, which is close enough.)

The second main standard is that there's documented "normal" values to go by when ingesting 75 g of sugar: Starting normal is 60-100 mg/dL, one hr after can be anything under 200 mg/dL, and two hrs after should be under 140 mg/dL, where 140-200 mg/dL is "prediabetic" (meaning not diabetic, but susceptible to becoming diabetic).
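Those cutoffs can be captured in a tiny helper for interpreting the two-hour number (a sketch of my own: the function name is mine, and the over-200 case, while not spelled out above, is the standard reading of these ranges):

```python
def classify_two_hour(reading_mg_dl):
    """Interpret a two-hour reading from a 75 g glucose tolerance test."""
    if reading_mg_dl < 140:
        return "normal"
    if reading_mg_dl <= 200:
        return "prediabetic"  # the 140-200 mg/dL band quoted above
    return "diabetic"         # above the prediabetic band

print(classify_two_hour(138))  # → normal
```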

Dad napped throughout his test (setting phone alarms to take the readings), while I was awake but felt exhausted and terrible again. We plotted our numerical results alongside the normal and prediabetic ranges.

My numbers were within the normal range, though my 2 hr reading was borderline prediabetic. Given that proximity and how bad I felt during these tests, keeping my blood sugar under control with a low-carb diet still seems important for me.

For context, my dad (Darwin) has been diagnosed diabetic in the past, but is now controlling his symptoms with diet and exercise. So his numbers being normal is not surprising. In fact, I take this to mean that if I stick to his diet, I should be able to get closer to his curve the next time I take this test (which will be at least six months from now).

Having maintained a low-carb diet so far with Soylent, my usual bacon/sausage and eggs breakfast, nuts for snacking, and whatever vegetables-and-meat dish I feel like for dinner, I've been feeling way better in general than I have been the weeks prior. I don't even feel the need to check my sugar level most days! (And if I do have a heavy carb meal, like Christmas dinner, then soon after eating the bad feelings settle in again, and immediate exercise helps dramatically.) Looks so far like I have a good plan.

*Things necessary to get started, which you may do slightly differently if you're not using Google Calendar, or if you want to use the python `icalendar` package instead.*

I used Google Calendar's `Settings → Import & export` feature to export all my calendars, then extracted from the `.zip` file the particular `.ics` file for my tutoring calendar. I renamed the file to something easier to type, `tutoring.ics`.

I'm using Python 3, which mainly means you'll see `print("hello")` in my code instead of `print "hello"`. I installed the python `ics` package to parse the file with.

*The trial-and-error before I had solid code. Skip if you just want the answer below!*

I started by checking the `ics` package's page to see if it had any good documentation. It focused on creating calendars from scratch and downloading them directly from the internet, neither of which I cared about, so I gave up on that.

Then I opened a python interpreter and ran:

```
import ics
```

then typed `ics.` and hit the `tab` key twice. For me, that listed off the functions in the `ics` package. (I could have just as easily run `help(ics)` or `dir(ics)` to similar but different effect.) The function `ics.events_in_year` looked like what I wanted, so then I ran:

```
help(ics.events_in_year)
```

and saw that it wanted two arguments, a `filename` and a `year`, which I assumed were supposed to be a string and an int. So then I ran:

```
ics.events_in_year("tutoring.ics", 2017)
```

which spat out an awful mess of data, so I captured that with:

```
data = _
```

which stored that last output as `data`. The awful mess looked like a list because it ended with `]`, so I checked what the first element looked like with:

```
data[0]
```

That looked like a dictionary since it ended with `}`, and had two notable entries I could spot at a glance: `'DTSTART': '20170125T210000Z\n'`, which looked like the start date-time, and `'SUMMARY'`, which had the name of a student of mine. So then I started making a for loop to print the names:

```
for datum in data:
    print(datum["SUMMARY"])
```

which returned a mess of names, with a bunch of extra space because of the final `'\n'` newline character in each `SUMMARY`. I also noticed (as expected) many duplicate names, which isn't helpful. To fix both issues:

```
names = set()  # like a list, but unordered and with no duplicates
for datum in data:
    name = datum["SUMMARY"][:-1]  # [:-1] drops the last character, the \n newline
    names.add(name)  # add this name to the set of names
for name in names: print(name)
```

That looked pretty good, but some names were old, from the spring or summer, when I only wanted the fall semester, so I started trying to select the names after August. First I ran `data[0]["DTSTART"]` to see what the date looked like again, then tested the slice `data[0]["DTSTART"][4:4+2]` to get the month (`4` to skip the 4 digits of year, `2` to include the next two digits). Then I tried to run:

```
names = set()
for datum in data:
    month = int(datum["DTSTART"][4:4+2])  # casting to int just in case
    if month <= 8: continue  # skip anything August or earlier
    name = datum["SUMMARY"][:-1]
    names.add(name)
for name in names: print(name)
```

but an error came up! `KeyError: 'DTSTART'`. Apparently some of the events don't have a start date? That didn't seem right. So I ran:

```
for datum in data:
    if "DTSTART" not in datum: break
datum
```

to find the first event without `DTSTART` and look at it. Turns out, some of the events have a `DTSTART;TZID=America/Chicago` entry instead, to show the time zone the date is in. That's cool; now I just have to first find a key that begins with `DTSTART` before I get the start date:

```
names = set()
for datum in data:
    for key in datum:
        # stop at the first key starting with DTSTART
        if key.startswith("DTSTART"): break
    month = int(datum[key][4:4+2])  # use that key, whatever it is
    if month <= 8: continue
    name = datum["SUMMARY"][:-1]
    names.add(name)
for name in names: print(name)
```

*What works.*

From within the directory containing the `tutoring.ics` file, either open a python interpreter and run the following code, or copy-paste it into a `.py` file and run it. If those words make no sense, look over this tutorial.

```
import ics

data = ics.events_in_year("tutoring.ics", 2017)
names = set()
for datum in data:
    for key in datum:
        if key.startswith("DTSTART"): break
    month = int(datum[key][4:4+2])
    if month <= 8: continue
    name = datum["SUMMARY"][:-1]
    names.add(name)
for name in sorted(names): print(name)
```

Then next time I need this script, I'd like to modify it so that it truncates everything after the second word of a name, so that the events where I added some notation in the `SUMMARY` after the name don't count as separate names.

Also, instead of simply including every name that appeared in September or later, I'm thinking I'd like it to build a dictionary of `name: month`, where `month` is the latest month in which `name` shows up. Then I can see where the prior semester seems to have ended, and select from those the names for this semester.
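That planned modification might look something like the following sketch, assuming the same `data` list of event dictionaries used above (`latest_months` is a name of my own; the `DTSTART`/`SUMMARY` key layout is the one found earlier):

```python
def latest_months(data):
    """Map each name (first two words of SUMMARY) to the latest month it appears in."""
    latest = {}
    for datum in data:
        # find whichever key starts with DTSTART (some carry a ;TZID=... suffix)
        key = next(k for k in datum if k.startswith("DTSTART"))
        month = int(datum[key][4:4+2])  # YYYYMMDD...: the month is digits 4-5
        # keep only the first two words, dropping the trailing newline and any notation
        name = " ".join(datum["SUMMARY"].strip().split()[:2])
        latest[name] = max(latest.get(name, 0), month)
    return latest
```

Printing the result sorted by month should make it easy to spot where the prior semester's names stop.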

The one thing it doesn't have is a list of topics, which is really unfortunate since Alain crams so much into 20 minutes that he never slows down enough to tell you what he's going to tell you.

That's what this document is: a list of his topics, with time stamps to follow along with. Be sure to notice the handy table of contents (perhaps by clicking the "show" button at right) that goes along with this page.

(0:03) He wrote an essay for *The New York Times* under this self-described "dramatic" title.

(0:19) He takes a poll, and about 30 respond as having married the wrong person.

(0:40) The objective of this talk is to take the anger people privately direct towards their love lives and turn it into sadness.

(1:12) Optimism/hope is necessary for anger.

(1:58) "Vast industries" build up our hope; in contrast, this talk will gently let you down.

- (2:33) Aside: "It's not that bad" because you're likely to marry a "good enough" person.

(3:00) We are all strange and hard to live with, and we don't know much about ourselves in that way.

(3:47) There is a wall of silence around this strangeness, as others know much about your flaws.

(4:53) Many of us are addicts in order to avoid spending time with ourselves.

(5:53) "Until you know yourself, you can't properly relate to another person."

(5:59) Love requires us to express dependence and vulnerability, which we don't want to do.

(6:19) Psychologists describe two patterns of behavior when there is a risk of being vulnerable, exposed.

(7:34) This is too humbling to say: "Even though I am a grown person, I need you like a small child needs a parent."

(7:57) "In short, we don't know how to love." Love is a difficult skill we need to learn.

(8:24) There is a distinction between loving and being loved, which we have much experience with.

(8:54) The core of love is to have the willingness to interpret another's behavior and find benevolent reasons.

(9:34) "Anyone that we can love is going to be a perplexing mixture of the good and the bad."

(9:42) Melanie Klein argued that infants don't recognize the good and bad things as coming from the same parent, until they are about 4 years old and are able to be ambivalent.

(10:52) "Everyone who we love is going to disappoint us. [...] Maturity is the ability to see that there are no heroes or sinners, really, among human beings, but all of us are this wonderfully perplexing mixture of the good and the bad."

(11:33) We're told to follow our instinct, heart, feelings, to stop reasoning, analyzing.

(11:58) "You can't think too much, you can only ever think badly."

(12:16) The way we love is built upon childhood experiences, where love is bound with suffering.

(12:58) When we start to choose love partners, "We are on a quest to suffer in ways that feel familiar."

(13:18) This is why we can date someone who seems really great in every way while we reject them because they don't excite us.

*This is when Alain assumes you've chosen a good enough partner and provides advice on how to help that relationship grow.*

(14:09) We believe that we won't have to explain who we are or how we feel to the right person.

(14:48) This leads to sulking: refusing to express what's wrong with someone we think has decided to not understand us.

(15:47) "The root to a good marriage and a good love is the ability to become a good teacher."

(16:15) Rather than being tired, frightened, and as a result humiliating, be relaxed and prepared for a lack of understanding.

(16:45) "You need a culture within a couple that two people are going to need to teach each other and therefore also learn from one another."

(16:53) Many people respond to criticism as an attack, but it's not.

(17:08) We tend to believe that love accepts instead of criticises, which is appalling.

(17:39) "Criticism is merely [...] to try and make us better versions of ourselves."

(17:53) The phrase "good enough" was taken from psychoanalyst Donald Winnicott when helping parents.

(18:21) "You cannot have perfection and company."

(18:38) Compatibility is the achievement of love, never present initially.

(19:03) Learn to respond better to your "types," who you tend to love.

(20:11) Recognize the nobility of compromise.

(20:56) Concluding quote by philosopher Søren Kierkegaard saying that in any case you will have regret, so don't beat yourself up about mistakes.

Having a *solution mindset* means describing problems in a way that asks for how to solve them, moving towards a solution, rather than presenting them as irresolvable fact. Since it's pretty obvious that having a solution mindset is necessary for problem solving, the focus here is those weird yet common cases where people don't do it, which I'll call *solution avoidance.*

When someone is described as a complainer, as anxious or argumentative, or as someone who habitually blames others, they are likely avoiding solutions. It's not wrong to point out problems or make criticisms. What feels bad is when the problems are described as absolutes, or as part of someone's identity, with no effort made to move towards a solution. When effort to approach a solution is made, it is usually appreciated, and it changes the whole tone and eventual outcome to become more positive.

For instance, when I was recently locked out of my apartment, my head was full of (panicked comments)(panic-bullets): statements of problems without any search for a solution.

(start panic-bullets)

- I don't have my phone!
- I can't contact my roommate!
- I don't know anyone around here!
- Why did my roommate lock the door?!

(stop panic-bullets)

As these thoughts seeped in, I recognized the solution avoidance, paused, took a few deep breaths, and told myself to have a solution mindset. Then instead of irrefutable statements, (I asked myself questions, and had a conversation.)(solution-bullets)

(start solution-bullets)

How can I contact someone without my phone?

I could borrow someone else's...

How can I contact my roommate?

I may have someone else's phone number memorized who knows him...

Does anyone around here know my roommate?

We did visit some neighbors once who know his phone number, and he knows someone who works at the Panera down the street...

(as I was walking towards a solution) How can I prevent this in the future?

I could memorize my roommate's phone number, and give it to people whose phone number I already have memorized...

(stop solution-bullets)

Eventually, I walked over to Panera, had someone who worked there contact the person we knew (who wasn't working that day), who then met me there and helped me get in touch with my roommate. Later, I made sure my family had my roommate's phone number so I could contact them as well if I ever forgot it.

There are many typical situations where a solution mindset would help. Here are a few solid examples:

(Anxiety.)(anxiety)

(start anxiety)

Colloquial anxiety, or common "worrying about problems," can become solution avoiding if it's pure worry. Consider The School of Life's response.

The clinical concept of anxiety features patients who "dislike uncertainty and unpredictability," feeling unable to search for resolution. Consider the ADAA's description.

(stop anxiety)

(Road Rage.)(road-rage)

(start road-rage)

It's easy to feel anger while driving at the other drivers or just rush hour traffic in general. Feeding that anger is solution avoidance, when you don't ask what you can do to improve your experience.

Consider how to share the road with raging drivers from two very different perspectives, a defensive driving expert and a motorcyclist.

(stop road-rage)

(Regret.)(regret)

(start regret)

Regret is either what motivates one to become better, or what cripples one to focus on a problem that can have no solution: the past. Consider Vsauce's take.

(stop regret)

(Game Toxicity.)(game-toxicity)

(start game-toxicity)

In general, there is an atmosphere for many video game communities where players find themselves unable to work together, often culminating in rage quitting. Consider Extra Credit's take.

For specific games, the issue becomes more focused by certain subcommunities or game designs, and advice can become more specific. Consider Heroes Academy discussing Heroes of the Storm.

(stop game-toxicity)

("Don't talk about Politics and Religion.")(politics-religion)

(start politics-religion)

These two topics so often devolve into argumentation that perpetuates itself, each side repeating the stance of "You should just agree with me because..." This often arises from false dichotomies.

For politics, consider the middle ground between "Liberal vs Conservative," and for religion, consider the middle ground between "Science vs God."

Note that deciding to avoid these topics in social situations is a solution to the problem "We're not enjoying our time together," but is solution avoidance when the problem is "We can't seem to find common ground."

(stop politics-religion)

*Is a solution mindset bad in some cases?*

Unfortunately, there are times when the problem at hand simply is unsolvable. This, however, is not a time to dispose of your solution mindset. Instead, you have identified that your perception of the problem is too vast or specific, too demanding, and it is once again time to take a step back and reevaluate. It may be time to dispose of your main goal, or simply set it aside and make subgoals.

For instance, when I ask myself "Why can't I work on anything today?" I typically take a step back and ask instead, "What am I considering work today?" or "How can I encourage myself?" For another case, when people say they "want to solve world hunger" they typically have set a direction to head towards, with the majority of their efforts solving smaller problems that support that potentially unachievable goal.

Holding a dish with one hand

and scrubbing with the other

reminds me of the necessity of stability

and action.

The initial addition of water

reminds me of the necessity

of preparation.

The addition of soap reminds me

of the necessity of an agent.

The final addition of water

reminds me of the often-forgotten

need for conclusion.

The placement on a rack for drying

reminds me of the need

for patience.

Finally...

All of this...

all these steps together,

as one procedure,

automatic, methodical,

reminds me of the potential

efficiency and capability

of true restoration.

So that I, in this process,

am somewhat restored and reformed into a new

familiar vessel.

Halfway through today, I felt the need to check in with myself. That is, I was starting to feel like what I was doing was no longer rewarding, and that when I tried to choose something to do next, I felt lost in too many options that had too much need to be done.

So I began a self-reflection, declaring that I would respect myself, lean into my edge, check my assumptions, and truly be present, so that I could really delve in and decide what I wanted out of today.

And when I started asking myself what I was doing, and why, I found myself washing a single dish while reflecting, without having deliberately chosen to do so, and conversed:

*What are you doing?* Washing a dish.

*Why?* Because it reminds me of the possibility and power of restoration.

*Why?* — to which I wrote a poem.

In each definition, the absolute value function is a [real function][(\(\mathbb{R}\to\mathbb{R},\) meaning that for every real input there is exactly one real output)] that I denote with \(\DeclareMathOperator{\absop}{abs}

\newcommand\abs[1]{\absop\left({#1}\right)}\abs{x}.\)

*This is the geometric definition, the intuition and the purpose.*

\(\abs{x}\) represents the distance between \(x\) and the origin. This is equivalent to the definition that \(\abs{x-y}\) represents the distance between \(x\) and \(y\). (Why?)(why-equiv-distance)

(start why-equiv-distance)

We will show equivalence by first assuming the distance definition for \(\abs{x}\) and deriving that of \(\abs{x-y}\) from it, and second by assuming the definition for \(\abs{x-y}\) and deriving \(\abs{x}\) from it.

Going from \(\abs{x}\) as the distance between \(x\) and the origin \(0\) to \(\abs{x-y}\) is a matter of shifting to the right by \(y.\) This means that \(\abs{x-y}\) as a function of \(x\) feels like \(\abs{x}\) if the origin were at \(y\) instead of \(0,\) and thus we have \(\abs{x-y}\) as the distance between \(x\) and \(y\) instead of \(0.\)

Going the other direction is easy: Simply let \(y=0,\) and then the distance \(\abs{x-y}\) between \(x\) and \(y=0\) becomes \(\abs{x},\) and so \(\abs{x}\) must be the distance between \(x\) and the origin.

(stop why-equiv-distance)

*This is the definition most courses use, the most direct for computation.*

When computing the value of \(\abs{x},\) if \(x\) isn't negative, leave it alone; if it is, swap it to positive. In precise notation, that means:

\[ \abs{x} = \begin{cases} x & \text{if $x\ge 0$} \\ -x & \text{if $x< 0$} \end{cases} \]

For instance, \(-5\) is negative, so the second case applies: \(\abs{-5}=-(-5)=5.\)

*This is a convenient trick.*

For calculators that don't have an absolute value, you can use \(\abs{x}=\sqrt{x^2}\) to compute it. This is also surprisingly useful for proving a few properties, especially for understanding why the solution to \(x^2=4\) is \(x=\pm 2.\) (Why?)(why-square-root-solution)

(start why-square-root-solution)

After taking the square root of both sides of \(x^2=4,\) we get \(\sqrt{x^2}=2.\) Many students try to simplify \(\sqrt{x^2}\) to \(x,\) but as mentioned above and proven below, \(\sqrt{x^2}=\abs{x}\) instead. So we have \(\abs{x}=2,\) which is easy to solve using one of the other definitions to get \(x=\pm2.\)

(stop why-square-root-solution)
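As a quick numerical sanity check of my own (not part of the argument above), the piecewise definition and the square-root trick agree on a handful of sample inputs:

```python
import math

def abs_piecewise(x):
    # the course definition: leave nonnegatives alone, negate negatives
    return x if x >= 0 else -x

def abs_sqrt(x):
    # the calculator trick: principal square root of the square
    return math.sqrt(x ** 2)

for x in [-3.5, -2, 0, 1.25, 4]:
    assert abs_piecewise(x) == abs_sqrt(x)
print("definitions agree")
```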

*How do we know these definitions are interchangeable?*

The first two definitions are equivalent somewhat by fiat: we define the distance of a nonnegative number from the origin to be the number itself. It then makes sense that the distance for a negative number be the opposite of itself, since you've gone that far in the negative direction.

This square root definition works because the square root function is actually the *principal* square root function. (That is...)(that-is-principal-square-root)

(start that-is-principal-square-root)

When looking for a square root of \(y,\) you want to find an \(x\) for which \(x^2=y,\) and when there is a solution, there are usually two, since \((-x)^2=(-1)^2x^2=x^2.\) When there's a positive and a negative choice for square roots, the principal square root function \(\sqrt{y}\) is conveniently defined to return the positive choice.

(stop that-is-principal-square-root)

So that means \(\abs{x}=\sqrt{x^2}\) is the same as saying \(\abs{x}^2=x^2\) together with \(\abs{x}\ge 0.\) When \(x\) is also not negative, this is the same as \(\abs{x}=x,\) and when \(x\) is negative, it is \(\abs{x}=-x,\) so this definition is equivalent to the piecewise definition.

After finishing my undergrad studies three years ago, I took a break from schoolwork to teach at a private school, tutor on the side, and pursue my other interests like playing violin. That went well, but lately I've been getting antsy, wanting to return to higher math in full force. I know I'm a little rusty, and I also know that there are some textbooks I've always been meaning to get around to that would likely fall by the wayside if I were to attend school again now. So over winter break, I decided it'd be a perfect time to really go all in on self-teaching this spring semester, and to design some full courses to manage myself.

**Algebra.** The first course, designed to shake off my rust, would use the Algebra text by Dummit and Foote, since algebra is probably going to be my specialization, and the first two parts of the text should make for a problem-driven review. So this course would be focused on working problems, only occasionally skimming the main text.

**Knot Theory.** The second course would use The Knot Book by Colin Adams, a text I'd been wanting to go through ever since my first course in knot theory five years ago, when we had used Knot Theory by Charles Livingston and I was thoroughly disappointed by how it covered the material. After searching for other texts at the time, I felt that Adams's would be a much better style for me, and I bought it immediately. I used it to teach the first two chapters of material to a high school student, but otherwise it's simply remained on a bookshelf, unread. This course would be focused on reading through the second half of the book, working the problems that interest me most.

**\(p\)-Adic Numbers.** The third course would be focused on understanding \(p\)-adic numbers, which I've long found interesting, but never gotten around to actually making sense of. When I tried to understand them three years ago, I was reading through Fernando Q. Gouvêa's book at Fondren Library. I recently visited Fondren again, found the book, and another by Alain M. Robert that seemed closer to what I needed. With two good texts, this course would focus on reading through the material, typically making up my own problems to try to make my confusions precise and resolve them.

**Topics.** The fourth course would be centered around all my extra math work. See, I get a lot of random ideas and problems to solve all the time, and I am very disorganized at dealing with them in any reasonable way. I typically work on whichever one happens to be on my mind, writing notes on random pieces of paper that get lost, if I write anything down at all. That... sucks. I also want to try to start staying in touch with new math papers as they're published, incorporating that material in with these other problems I'm working on. So resolving these two issues would be the focus of this course.
(Anecdote...)(anecdote-weathers)

(start anecdote-weathers)

I remember about a decade ago, I asked a physics professor (Dr. Weathers) a question that reminded him of previous work, and he opened a file cabinet and quickly located a file containing a napkin with perfectly relevant notes on it. That moment always stuck with me as the kind of thing I'd like to be able to do some day, though I've never been one to deal well with physical organization. The trick for me would certainly be to require my notes to become electronic immediately as they are written, but I still don't have a good system of organization for the content even once it's saved.

(stop anecdote-weathers)

I've tried to deal with all of these sorts of things in the past, and failure seemed swift with every attempt. I decided this time to really make it feel like school again, and in the first week, wrote up syllabi for each course, set assignment structures, test dates, everything. I also set weekly "class time" events when homeworks would be due and so on. This has worked *really well* at helping me form the habit of class for a few reasons:

I've often failed on previous attempts because I'd miss my own deadlines in the first week or two, and then have no good way to scale up to what I wanted. Having a syllabus week started [the habit][(remembering when the deadlines are)] while providing necessary yet achievable, low-pressure tasks to complete.

Having a week of accomplishing something at each deadline gave me a better feeling for how much to include in homework assignments.

Putting my plans down in writing made it feel truly real and forced me to know if I was sticking to them. Previous attempts of just vaguely saying "I'll do stuff every week" made me able to wiggle around what I initially had in mind, and ease up on myself unnecessarily.

The other largest hurdles in course design in the past have been motivating homework and creating exams. I'm always motivated to work problems conceptually, but actually writing the answers down, or working a quantity of problems rather than just my favorites, has always been difficult. For exams, it always feels like I have to invest so much time in coming up with a good selection of problems, and it always pans out that I've chosen too many problems for the exam, so much of my effort was in a sense a waste.

This time, I realized that I could solve my homework and exam woes simultaneously with the same solution. When I choose my homework problems from the text, all those that I don't work on go into a test pool. When test day arrives, I run a Python script that randomly selects problems from the pool and writes the exam for me, which I must try to complete in a limited time. Thus, my time spent choosing homework and exam problems is efficiently merged, and I'm strongly motivated to work the harder problems for homework, as I wouldn't want them for the exam. Finally, any problem may be chosen for the exam, so even the ones I find tedious to write out may have to be written out.
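
As a rough sketch of the kind of script involved (the pool entries and function names here are hypothetical, not my actual script):

```python
import random

# Hypothetical test pool: homework problems I skipped, one entry each.
test_pool = [
    "Dummit & Foote 1.3.5",
    "Dummit & Foote 2.1.7",
    "Adams 4.12",
    "Adams 5.3",
]

def write_exam(pool, num_problems, seed=None):
    """Randomly select exam problems from the pool, without repeats."""
    rng = random.Random(seed)
    return rng.sample(pool, min(num_problems, len(pool)))

# Print a three-problem exam.
for i, problem in enumerate(write_exam(test_pool, 3), start=1):
    print(f"Problem {i}: {problem}")
```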

I knew that however I wrote up my syllabi to begin with, they would probably have to be changed mid-semester. Some of them even explicitly stated which parts should be considered first for modification. To aid in my analysis, I made sure to time journal, recording in Google Calendar the time I spent working on each subject.

After four solid weeks of work, I found myself burnt out. (Acquiring a knee injury and having an almost-sick weekend certainly didn't help.) So, it was clear that my fifth week plans would be replaced with "take a break and make modifications".

The most stunning result of my analysis has been that I was spending about as much time per week on algebra as I was on knot theory. (They were 11 and 10 hours per week, respectively.) This is stunning to me because I *knew* I was spending *way* too much time on algebra, but felt that my time spent on knot theory was much less and more appropriate. (Why?)(why-time-lost)

(start why-time-lost)

My best guess for why the disparity between my expectation and reality exists is that my algebra work has been mostly about concepts I'm already rather familiar with, so most of my time is spent writing the proofs. On the other hand, my work in knot theory has been in less familiar territory, so most of my time is spent devising the proofs. Apparently, my ability to estimate time spent pondering is completely bunk.

(stop why-time-lost)

So, I have decided to change to a maximum time limit rather than a minimum work limit. That is, I will strive for similar goals on homework each week, but after 5 hours of work in a week on a particular subject, I am not to work any more. This should work well with my current homework/exam system, since any problems I don't get to finish as homework problems are then put in the test pool, so I have the same incentive to work as before.

In order for this change to work, I'll have to change how I manage my time a bit. My plan is to set alarms to check on my progress as I'm working. One plan is to set an alarm for half the time I intend to work in a given session. That way, at the halfway point, I can evaluate how I've spent my time so far, and possibly change my efficiency for the final half. Alternatively, if I have many tasks to do in a session, I may set many smaller alarms, each for the amount of time I want to spend on each task. We'll see.

Netrunner is a two-player asymmetric card game: modeled in deck design after trading card games, modeled in theme and flavor after the universe of William Gibson's novels such as Neuromancer, and unique in its gameplay.

One player is a **Corporation,** while the other player is a **Runner.** The **Corp** is trying to complete 7 points worth of *agendas,* by advancing them, while the **Runner** is trying to steal 7 points worth of *agendas,* by accessing each in a run on the **Corp.**

The **Corp** can also lose by drawing when the **Corp** deck has no cards, and similarly, the **Runner** can lose by discarding when the **Runner** hand has no cards.

To set up the game, each player gains five credits, shuffles their deck, draws a (maximum-size) hand of five cards, and has exactly one chance to mulligan for a reshuffled hand of five cards. The **Runner** begins with 4 MU, which is not denoted in any way.

The **Corporation** has multiple lanes or columns of vulnerabilities for the **Runner** to run. The **Corp** defends these lanes by placing *ice* in them, which the **Runner** will have to encounter before reaching and ultimately accessing the vulnerability. The **Corp** has three permanent vulnerabilities called *central servers,* each with their own lane:

- HQ (the hand, denoted by the identity card)
- R&D (the deck)
- Archives (the discard pile)

These lanes are already occupied by a vulnerability, and thus can only have ice or *upgrades* installed on them. The other lanes are *remote servers,* each of which can have at most one agenda or *asset* card installed, in addition to any ice or upgrades. There are no other limits on servers. (For example...)(no-server-limits)

(start no-server-limits)

Any number of ice or upgrade cards may be installed on each server, even remote servers with no agenda or asset installed. Any number of remote servers may exist.

(stop no-server-limits)

The **Corporation** installs cards on servers face-down. The **Corp** may look at the face-down cards that are not in [R&D][(the deck)] at any time during the game, while the **Runner** may not. Both players may look at any face-up cards at any time. Cards not in the [archives][(the discard pile)] are face-up only when rezzed; cards in the archives are face-up only when the **Runner** has looked at them before, either because they were face-up when [trashed][(put in archives)] or because they have been accessed.

Face-down cards in the archives are turned [sideways][(landscape/horizontal)] to ensure the **Runner** knows they are there. Installed ice is turned sideways and separated to ensure the **Runner** knows how many pieces of ice defend each lane and in what order, even while some may still be unrezzed. All other face-down cards are deliberately placed [normally][(portrait/vertical)] so the **Runner** cannot tell what kind of non-ice cards they may be.

The **Runner** also has special names for the basic regions:

- grip (the hand)
- stack (the deck)
- heap (the discard pile)

The **Runner** has the ability to play cards into the *rig* (in play), representing what he has available to run with. There are three rows in the rig:

- programs, which require MU to install and can contain icebreakers
- hardware, which are the basic rig cards
- resources, which can be [trashed][(put in the heap, the discard pile)] by the **Corp** if the **Runner** is tagged

Unlike **Corp** cards, all the cards in the **Runner**'s [rig][(in play)] and [heap][(the discard pile)] stay face-up, available for both players to view at any time.

The only kinds of cards not mentioned so far are those that are [trashed][(put in the discard pile, either archives or heap)] when played. For the **Corp,** these are called *operations,* and they go directly to the archives. For the **Runner,** these are called *events,* and they go directly to the heap.

When viewed in the hand, all cards are designed to be read [normally,][(portrait/vertically)] so all of the positions described below assume this normal orientation.

The cost in *credits* to activate a card is on the top-left of the card. For operations and events, this is the cost to play and trash it. For the **Runner**'s programs, hardware, and resources, this is the cost to install the card on the [rig.][(in play)] For the **Corp**'s ice, upgrades, and assets, this is the cost to rez a card that is already installed on a [server.][(in play)]

The *strength* of ice or programs is on the bottom-left of the card. A program's *icebreaker* cannot be used on a piece of ice unless the strength of the program meets or exceeds that of the ice. Any boosts to a program's strength only apply while encountering one piece of ice, after which it resets to the strength on the program's card for the next step of the run.

Icebreakers typically only cancel one ice *subroutine* at a time. Each ice subroutine starts with a "↳" symbol.

Icebreakers typically can only cancel one subtype of ice. The subtype(s) of the ice is on the left of the card. There are three subtypes of ice: *sentry,* *barrier,* and *code gate.*

Agendas have their *agenda points* on the middle-left of the card. These are the victory points for the game: for the **Corporation** if the card is advanced to completion, or for the **Runner** if the agenda is stolen. The number of advancement tokens required for scoring an agenda for the **Corp** is on the top-right of the card. Agendas are only activated when scored.

Upgrades and assets are also vulnerable to **Runner** access. When accessed, the **Runner** may choose to *trash* (put in archives) the card by spending credits equal to its *trash cost,* which is on the bottom-right of the card text.

Each player starts their turn by performing actions, and ends their turn by *discarding* ((not trashing!Discard.)(actions-not-trashing)) down to their maximum hand size (five cards at the start).

(start actions-not-trashing)

It has the same effect as a [trash,][(put the card in the discard pile)] but it has a different name so it cannot be prevented by card abilities which prevent trashing. The same is true when the **Runner** takes damage.

(stop actions-not-trashing)

Each player typically performs four actions in a turn. The **Corp**'s first action must be to draw a card; this is shown by the **Corp** only receiving three *clicks* to spend on other actions. The options are:

- Draw a card.
- Play a card.
- Gain a credit.
- Spend a credit to add an advancement token to a card.
- Spend two credits to trash a resource if the **Runner** is tagged.
- Spend two additional clicks to remove all virus counters from all cards.

Cards are drawn from [R&D][(deck)] into [HQ][(hand)] and played from HQ onto [servers][(in play)] (installing) or into [archives][(discard pile)] (for operations).

Agendas and any asset that says it can be advanced are the only cards that can be advanced. After advancing an agenda, check if it met its advancement requirement and can be scored.

**Corp** cards are always played on [servers][(in play)] face-down and inactive. Non-ice cards are simply stacked face-down in the server, while ice is played in the landscape or horizontal orientation in front of any ice or server contents.

There is typically no install cost, except when playing ice on a server that already has ice on it, in which case the **Corp** must spend one credit per piece of ice already installed on the server. To avoid this cost, the **Corp** may choose to [trash][(put in archives)] any ice as he installs new ice on the same server.

Similarly, the **Corp** may choose to trash an agenda or asset when installing a new agenda or asset on the same server, since only one can be on a server at a time. These are the only times such trashes can be made.

To make an agenda active, it must be scored by advancing it to completion. To make any other card active, it must be rezzed.

The **Runner** receives four *clicks* to spend on actions each turn. The options are:

- Draw a card.
- Play a card.
- Gain a credit.
- Make a run.
- Spend two credits to remove a tag.

Cards are drawn from the [stack][(deck)] into the [grip][(hand)] and played from the grip onto the [rig][(in play)] (installing) or into the [heap][(discard pile)] (for events).

**Runner** cards are always played on the [rig][(in play)] face-up and active, requiring the activation cost to be paid in credits immediately.

After installing a program, the **Runner** must [trash][(put in the heap)] programs until there are enough *MU* (memory units) for all of the programs on the rig. The **Runner** starts with a base 4 MU, which can be altered by cards in the rig. The MU required for each program is on the top-left of the card, just to the right of the activation cost.

When the **Runner** initiates a run, he chooses which server to run and *approaches* the outermost ice. On any approach, the **Corp** may then rez the approached ice (and any non-ice cards), and if so, the **Runner** *encounters* the ice, using icebreakers and applying ice subroutines (↳) that are not broken in order. After encountering the ice, or *passing* it if it remained unrezzed, the **Runner** decides whether to approach [the next thing in line][(either ice or server contents)] or jack out and end the run *unsuccessfully*. After approaching all ice in order, the **Runner** may choose to approach the server contents, making the run *successful* and *accessing* the server. As with approaching ice, the **Corp** may rez any non-ice cards after this approach, before the resulting access.

When a remote server is accessed, all its [contents][(upgrades and at most one asset or agenda card)] are revealed to the **Runner,** who may then *steal* (score) any agenda and choose to pay the trash cost of any upgrades or asset. Each of these cards is accessed one at a time, in whatever order the **Runner** chooses.

Each central server has a unique effect when accessed:

- HQ: The **Runner** accesses a card chosen at random from the hand.
- R&D: The **Runner** accesses the top card of the deck.
- Archives: The **Runner** accesses all cards from the discard pile.

When R&D is accessed, the **Corp** does not get to look at the accessed card unless it gets stolen or trashed. When Archives is accessed, all cards are turned face-up, and no cards may be trashed. Each card is still accessed one at a time in all cases.

Card text always takes precedence over game rules when there is a conflict.

A card being *active* means its text/abilities take effect; *inactive* means the opposite. To make a card active is to *activate* it.

A **Corp** card is *rezzed* when it is face-up and active. Becoming rezzed requires a cost in credits, and can only occur after installing. Non-ice can be rezzed immediately before almost any game step completes; ice can only be rezzed when approached by the **Runner.**

When a card ability says to *expose* a card, reveal it to all players and return it to its previous state. This does not count as an access.

Some cards become *hosted* by other cards. Such cards are [trashed][(put in the discard pile)] when the host card is trashed.

When a card ability has a number of credits with an arrow (↶) over the credit symbol, it means the card comes into play with those credits on it, then replenishes those credits every round, at the start of the card owner's turn. The card will typically restrict how those credits can be used.

(Note: If the **Runner** plans to play with the Noise deck...)(noise-note)

(start noise-note)

Only one copy of Wyldside may be in play at a time. This is denoted by the diamond before the title. (Other core set cards have this diamond, but Wyldside is the only one with more than one copy.)

(stop noise-note)

*Credits* are the currency of the game, which come in 1's and 5's. The backside of the 1-credit is the *advancement token,* gained as a **Corp** action to try to complete agendas and win the game.

The red brain counters denote brain damage.

The blue/green rectangular counters are for tagging and bad publicity.

The blue/red circular counters are versatile. For instance, they may be blue **Corp** *agenda counters,* red **Runner** *virus counters,* or miscellaneous *power counters* for either player.

When the **Runner** takes *meat damage* or *net damage,* discard (not trash) a card randomly from the [grip.][(**Runner** hand)] (These damages have different names so that card effects can prevent only one or the other.) This can trigger a **Corp** win immediately if the grip was already empty.

When the **Runner** takes *brain damage,* discard (not trash) a card randomly from the grip, and reduce the **Runner**'s maximum hand size by one, taking a brain damage counter to denote this. This can trigger a **Corp** win immediately if the grip was already empty, or at the end of the **Runner**'s turn if the maximum hand size was already zero.

The blue side of the rectangular counter is used for tags, the green side for bad publicity.

Any time a card has an effect for the **Runner** to gain a *tag,* an additional tag is given to the **Runner.** This allows the **Corp** to perform extra actions, such as the default "spend a click and two credits to trash a resource" or the neutral agenda Private Security Force ability "spend a click to deal one meat damage".

Any time a card has an effect for the **Corp** to gain *bad publicity,* an additional bad publicity counter is given to the **Corp.** The **Runner** gains one credit for every bad publicity counter at the start of each run, but these credits are returned when the run ends if they are not spent.

Some **Corp** cards have a "Trace\(^n\)" ability, which begins a *trace.* In a trace, the **Corp** begins with strength \(n,\) while the **Runner** begins with strength equal to *links* (◰), found on the identity card and sometimes on cards in the [rig.][(in play)] Then the **Corp** openly spends some number of credits, increasing strength by that amount. Then the **Runner** does the same. The trace is successful if the **Corp** has greater strength, unsuccessful if the **Runner** has equal or greater strength.
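
The trace arithmetic above can be sketched as a small function (an illustrative model of the rule as described, not official tooling; the parameter names are my own):

```python
def trace_succeeds(n, corp_spend, links, runner_spend):
    # Trace^n: the Corp starts at strength n, the Runner at link strength.
    corp_strength = n + corp_spend
    runner_strength = links + runner_spend
    # The trace succeeds only if the Corp's strength is strictly greater;
    # ties go to the Runner.
    return corp_strength > runner_strength

# Trace^3 with the Corp spending 2, against a Runner with 1 link spending 3:
print(trace_succeeds(3, 2, 1, 3))  # Corp 5 vs Runner 4 -> True
```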

Most probabilities are not independent, which means they have some sort of correlation. (Examples...)(correlated-examples)

(start correlated-examples)

This US presidential election year, we have Trump as the Republican candidate and Hillary as the Democratic candidate. The chance that Hillary wins has a negative correlation with the chance that Trump wins, since knowing that one has won tells you that the other winning is less likely. Similarly, the chance that Hillary files an executive order has a positive correlation with the chance that Hillary wins the election, since knowing Hillary has become president tells you that filing an executive order is more likely.

(stop correlated-examples)

So, independent probabilities are not like those, but are instead entirely unrelated, with no correlation whatsoever. (Examples...)(independent-examples)

(start independent-examples)

The best examples often sound completely off the wall, like the chance of Trump winning the presidential election being independent with the chance you will eat soup this week. Another could be the chance that you roll an even number on a die being independent with the chance that I find a penny on the ground this month.

(stop independent-examples)

So now comes the interesting part. If you have two independent events \(U\) and \(V\), with probabilities \(P(U)\) and \(P(V)\), what's the chance of both occurring, [written \(P(U\cap V)\)?][(said "the probability of the intersection of U and V" or "P of U cap V")]

You may think that the independence of \(U\) and \(V\) means we know very little about how they are related, but it's actually the opposite: We know they are completely unrelated. This unrelatedness means there must be a (fixed value for \(P(U\cap V),\)!Fixed Intersection.)(fixed-intersection) and further, it gives us (a symmetry!A symmetry.)(independent-symmetry).

(start fixed-intersection)

Recall that for our pair of independent events \(U\) and \(V,\) there can be no positive or negative correlation. Each of the chances \(P(U)\) and \(P(V)\) are fixed, and there must be some chance \(P(U\cap V)\) that they have together. If \(P(U\cap V)\) were any larger, then \(U\) and \(V\) would have a positive correlation, and if \(P(U\cap V)\) were any smaller, then \(U\) and \(V\) would have a negative correlation. So, there must be some fixed value for \(P(U\cap V)\) when there is no correlation at all.

(stop fixed-intersection)

(start independent-symmetry)

If \(U\) and \(V\) are actually independent events, then knowing that one event has happened should not affect the probability of the other. In particular, if we look at only situations where \(U\) occurred, we should get the same probabilities for \(V\) occurring within those as within all possible situations.

(stop independent-symmetry)

Now we can use that symmetry to find \(P(U\cap V)\). Here are two different but equally good ways to explain this step. (In terms of random sampling...)(random-sampling) (With a diagram...)(with-a-diagram)

(start random-sampling)

First note that with a total sample size of \(N\), if the sample size is large enough, the number of samples where \(U\) occurs should be about \(N\cdot P(U).\) If \(U\) is a set of possibilities chosen randomly as far as \(V\) cares, and if the sample size \(N\cdot P(U)\) of samples where \(U\) occurs is large enough, then the symmetry of independence says the number of those samples where \(V\) occurs should be about \((N\cdot P(U))\cdot P(V).\) Since these are the total number of samples where \(U\) and \(V\) both occur, it must be about equal to \(N\cdot P(U\cap V)\). After you divide both by the total sample size \(N\), you get the result \(P(U\cap V) = P(U)\cdot P(V).\) (All statistics assumes \(N\) is large enough to make the differences small enough to not care.)
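
This sampling argument is easy to check numerically. A quick Monte Carlo sketch, with arbitrarily chosen probabilities:

```python
import random

random.seed(0)   # make the run repeatable
N = 200_000      # sample size, large enough that the error is small
p_u, p_v = 0.3, 0.6

# Each trial draws U and V from two independent random numbers.
both = sum(1 for _ in range(N)
           if random.random() < p_u and random.random() < p_v)

estimate = both / N
print(estimate)  # should land close to p_u * p_v = 0.18
```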

(stop random-sampling)

(start with-a-diagram)

Consider the symmetry of independence with a Venn diagram: (What is this?)(what-venn-diagram) (Why can we use it?)(why-venn-diagram)

(start what-venn-diagram)

This diagram is meant to show geometrically how every situation is in one of the following cases:

- \(U\) happened, but not \(V\) (blue)
- \(V\) happened, but not \(U\) (red)
- \(U\) and \(V\) both happened (purple-ish)
- neither \(U\) nor \(V\) happened (green)

(stop what-venn-diagram)

(start why-venn-diagram)

We know all possible situations are illustrated because for each of the events, they must either happen or not happen. (This is the same reasoning that tells us that \(P(U)\) and \(P(\operatorname{not}U)\) add up to \(1\).)

(stop why-venn-diagram)

The symmetry says that \(P(V)\), how much of the whole rectangle is covered by the red \(V\) circle, needs to be equal to how much of the blue \(U\) circle is covered by the purple intersection. Since \(P(U)\) is how much of the whole rectangle is covered by the blue \(U\) circle, that means multiplying the two probabilities should give the probability for the purple intersection:

\[ \begin{aligned} &P(U)\cdot P(V) \\ &= \frac{\text{blue circle}}{\text{green rectangle}} \cdot\frac{\text{purple intersection}}{\text{blue circle}} \\ &=\frac{\text{purple intersection}}{\text{green rectangle}} \\ &= P(U\cap V) \end{aligned} \]

(stop with-a-diagram)

Now that we know what \(P(U\cap V)\) is, we can actually reduce all problems with independent probabilities to geometric ones using (a better diagram!Better diagram.)(better-diagram) than the generic Venn diagram in the last section.

(start better-diagram)

Since \(P(U\cap V)=P(U)\cdot P(V),\) it makes sense that we might want \(U\cap V\) to be represented by a rectangle with side lengths \(P(U)\) and \(P(V)\). If that rectangle were placed inside a shape with an area of \(1\) square unit, then the proportion of area covered by \(U\cap V\) would equal \(P(U\cap V).\) It just so happens that if you choose to put it inside a square with side length \(1,\) then this gives a full picture of the situation:

This works because it (meets all the conditions!Conditions.)(better-conditions). After drawing it, it (matches intuition!Intuition.)(better-intuition) about independent events. And it's a better diagram because you can (find all the probabilities geometrically!Geometric Probabilities.)(better-geometry).

(start better-conditions)

- The chance of a point in the square being in \(U\) is \(P(U).\)
- The chance of a point in the square being in \(V\) is \(P(V).\)
- The chance of a point in the square being in both is \(P(U)\cdot P(V).\)

(stop better-conditions)

(start better-intuition)

The intuition comes from measuring out the probabilities in independent dimensions. Think of it this way: If you choose a random point in the square, its \(x\)-coordinate tells you whether \(U\) happened, and its \(y\)-coordinate tells you whether \(V\) happened. Since independent variables told you each thing, knowing one doesn't affect the other. You can even extend this reasoning to three events, measuring out their probabilities as lengths on the edges of a cube.

(stop better-intuition)

(start better-geometry)

The chance of a point being in \(U\) or \(V\) is [\(P(U)+P(V)-P(U)\cdot P(V).\)][(found by adding the areas of \(U\) and \(V\) together and subtracting off the area of \(U\cap V\) since it was counted twice)]

The chance of a point being in \(U\) but not \(V\) is [\(P(U) - P(U)\cdot P(V)\)][(found by starting with the area of \(U\) and subtracting off the area of \(U\cap V\))] or [\(P(U)\cdot(1-P(V)).\)][(found by finding that the length from the top of the square to the top of \(U\) is \(1-P(V)\))] Similarly, the chance of a point being in \(V\) but not \(U\) is \(P(V)-P(U)\cdot P(V)=P(V)\cdot(1-P(U)).\)

The chance of a point not being in \(U\) or \(V\) is [\(1-P(U)-P(V)+P(U)\cdot P(V)\)][(found by starting with the whole square and subtracting off \(U\) and \(V,\) adding back \(U\cap V\) since it was subtracted twice)] or [\((1-P(U))\cdot(1-P(V)).\)][(found by finding the lengths from the top-left to the edges of \(U\) and \(V\) and multiplying them together)]
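
All of these identities can be verified with a few lines of arithmetic (the probabilities below are arbitrary examples):

```python
p_u, p_v = 0.3, 0.6  # arbitrary example probabilities

in_either  = p_u + p_v - p_u * p_v   # P(U or V)
u_not_v    = p_u * (1 - p_v)         # P(U but not V)
v_not_u    = p_v * (1 - p_u)         # P(V but not U)
in_neither = (1 - p_u) * (1 - p_v)   # P(neither U nor V)

# The two forms for "U but not V" agree...
assert abs(u_not_v - (p_u - p_u * p_v)) < 1e-12
# ...and the four disjoint regions of the square sum to 1.
assert abs(u_not_v + v_not_u + p_u * p_v + in_neither - 1) < 1e-12

print(in_either, in_neither)  # 0.72 and 0.28
```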

(stop better-geometry)

(stop better-diagram)

I tend to have trouble drawing *things,* in the sense that I am horribly unmotivated to do so. In October of 2014, I bought a copy of Yoko Ono's Acorn, and immediately appreciated the drawings within.

So I went down to the art store, bought some charcoal and some big paper, and set out to try drawing feelings instead of things. Turns out, when I'm angry-like (specific emotions are hard to describe) in particular, drawing the feelings out can be therapeutic. It's like everyone was telling the truth about art the whole time.

At the end of it all, this is what my apartment wall looked like:

The blue painter's tape is a patented mounting system. (I'll probably choose a few to frame soon.)

Below are all of the individual works. Of them, I chose to hang up, from left to right: Unease, Complex, Crawling (rotated), and Dripping.

These are listed in the order I made them. There were a few tiny works (as in, actually done on tiny paper) that I regarded as warm-up that I never took pictures of. But other than those, these were my first works with charcoal.

*October 11, 2014*

I'm still really pleased with how this one looks, though I'm not sure exactly why. Messy yet clearly defined, perhaps?

*October 12, 2014*

A friend gave me some pointers on shading with charcoal, and suggested I practice, so I did. This shape just... happened.

*October 13, 2014*

Inspired by the drawing on the page facing "Connection Piece VI" in *Acorn.*

*October 14, 2014*

Inspired by the drawing on the page facing "Wish Piece III" in *Acorn.* Moved on before I finished getting the shading the way I wanted.

*November 5, 2014*

Inspired by two different drawings on the pages facing "Room Piece II" and "Sound Piece IX" in *Acorn.* This shape, too, just... happened.

*December 7, 2014*

Largely unplanned. Definitely based on how I was feeling at the time.

*June 3, 2015*

Largely unplanned. I really prefer the way it looks when rotated, though this is the orientation used when making it.

Heuristics are best described in tandem with algorithms. Both are (processes that return answers!Processes.)(process-definition), but the difference is: algorithms keep going until you find *the* answer, while heuristics do just enough to arrive at a reasonable guess. In order to make reasonable guesses, a heuristic must be a learning, trial-and-error process.

(start process-definition)

To be clear, a process here refers to a set of instructions that a human (or computer, or whatever) can follow in order to make a decision. Under this definition, every decision you make comes from either an algorithm or a heuristic.

(stop process-definition)

Algorithms are the best choice when you have unlimited computation time, but heuristics can be necessary when working under a time constraint or when the problem is too unwieldy to easily code a precise algorithm. (For example...)(virus-detection)

(start virus-detection)

Virus detection for antivirus software uses heuristics both to meet time constraints and to lessen programming time. After all, the longer detection takes, the less processing time the user has for their own tasks, and coding a "perfect" analysis is really unwieldy.

(stop virus-detection)
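To make the distinction concrete, here is a toy sketch in Python (entirely my own illustration, not drawn from any antivirus): an algorithm that examines everything versus a heuristic that samples and guesses.

```
def exhaustive_max(values):
    # Algorithm: examine every element; guaranteed to return *the* answer.
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

def sampled_max(values, samples=10):
    # Heuristic: look at a handful of evenly spaced elements and return
    # a reasonable guess, trading certainty for speed on huge inputs.
    step = max(1, len(values) // samples)
    return max(values[::step])
```

The heuristic touches only a fraction of the input, so it can be wrong, but never by claiming something larger than the true maximum.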

(Note: Set aside certain psychology results, for now.)(heuristics-psychology)

(start heuristics-psychology)

Though I'm using the same definition as you would in psychology, I'm not going to refer to any specific results or descriptions from that field, such as these well-known heuristics. I'm taking the computer science applications alone and using them to rethink the way we look at emotion.

(stop heuristics-psychology)

Consider the emotion of satisfaction. Certain interactions with things or people can make you satisfied or dissatisfied, and you tend to continue satisfying interactions and stop dissatisfying ones. This is an easy, effective heuristic, guessing at the answer to "Will doing this be good for me?"

This question is well-suited for a heuristic approach because there is usually not enough information to know *the* answer for certain, and the information you do have can be vast. Instances of uncertainty include choosing to visit a museum without knowing exactly what's inside and choosing to interact with someone without knowing exactly what they'll say or do. As for the vastness, basing your decisions off of previous experience makes sense, but waiting to greet a friend until you've remembered every previous interaction with them does not.

There are a number of details we can deduce, which I'll walk through, using the emotions of satisfaction and dissatisfaction as examples. I'm only going over the things that seem to apply to every emotion, so feel free to try to think of examples of the same things with different emotions. (Note that it is likely for different processes to be used by different emotions or different people, especially when some people don't seem to experience or rely on certain emotions.)

(

**Emotion has an accumulation on a scale.**)(emotion-accumulation)(start emotion-accumulation)

For example, when someone does something you don't like, you won't completely avoid them unless they do it more. The dissatisfaction seems to build up, and you become less and less likely to choose to interact with them. Another example is when you must choose between two satisfying options. To decide, you may compare the satisfaction you feel when imagining the choices, implying that you can put satisfaction on some sort of scale.

(stop emotion-accumulation)

(

**Emotion is often the first thing felt during recall.**)(emotion-first-felt)(start emotion-first-felt)

There are many examples of this, from being satisfied at the mention of your last birthday party without remembering exactly what happened in it, to declining to go to a store that felt dissatisfying without remembering why you dislike it. A more interesting example is when this conflates emotions, such as when a good friend does something very dissatisfactory, and then the mere mention of the friend feels both good and bad at once.

(stop emotion-first-felt)

(

**Emotion and memory recall affect each other.**)(emotion-attached-to-memory)(start emotion-attached-to-memory)

It's clear that remembering a satisfying event makes you feel satisfied, but it's also possible for feeling satisfied to remind you of satisfying events. Also, the inverse can be true, when you feel very dissatisfied and find it difficult to remember recent satisfying events.

(stop emotion-attached-to-memory)

(

**Emotion and experience affect each other.**)(emotion-experience)(start emotion-experience)

Once again, it's clear that experience affects emotion; the interesting thing is that it goes both ways. An example of this is with nostalgia, when someone keeps recalling certain events with satisfaction, and over time, the memory itself changes to become more satisfactory. (Note...)(note-ptsd)

(start note-ptsd)

This is a kind of positive feedback loop that is easy to form and can be detrimental. For instance, traumatic events can give rise to PTSD, where anything associated with the event immediately triggers negative emotions that can be even worse than those experienced during the event.

(stop note-ptsd)

It can be harder to notice how emotion also affects experience immediately, as the event occurs. For that, I recommend something like walking through a park twice, the first time periodically pausing to recall a dissatisfying event, the second a really satisfying one. What you notice each time will tend to be different in different emotional states, even though the same things may be happening around you.

(stop emotion-experience)

Given these properties, the model I use for emotion is the following. Each experience, as it happens or is remembered, affects and is affected by the current emotion. Then the experience is stored in memory, both with that emotion attached and with the memory attached to that emotion. The emotion stored is the sum of all experienced emotions on their various scales, to allow conflation.

The purpose of this entire process is to allow quick decision-making based on past events without having to explicitly recall each event in entirety, which is why the emotion is recalled first. This purpose also applies to subconscious decisions: You cannot perceive everything at once, so what you focus on and what gets saved to memory is affected by emotion. Since emotion is decided very quickly in the moment, upon later recollections, it may need to be altered. And lastly, it only makes sense to conflate emotions, since every bit is important for the decisions; there's rarely a case where the memories tied to one emotion should cancel out or overwrite another.
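A minimal sketch of this model in Python, with names and numbers entirely of my own invention (a single signed "satisfaction" scale):

```
class EmotionalMemory:
    def __init__(self):
        self.feeling = 0.0    # current emotion, accumulated on a scale
        self.memories = []    # each event stored with its emotion tag

    def experience(self, event, emotion):
        # Each experience shifts the current emotion, then is stored
        # with that emotion attached.
        self.feeling += emotion
        self.memories.append((event, emotion))

    def recall(self, event):
        # The emotional tags come back first and are summed (allowing
        # conflation); the event details need not be recalled at all.
        tag = sum(e for name, e in self.memories if name == event)
        self.feeling += tag
        return tag
```

Note how recalling an event with mixed tags returns their sum, matching the "good and bad at once" conflation described above, and how recall itself shifts the current feeling.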

There are numerous pitfalls for heuristics in computer science, and the same concerns apply to emotions:

(

**The heuristic should have a well-founded theory.**)(heuristic-well-founded)(start heuristic-well-founded)

Since emotion is the product of an evolutionary trial-and-error, there's no firm theory behind it. In particular, it seems that everyone has different emotional schemas, and every one of them has flaws in dealing with certain situations.

(stop heuristic-well-founded)

(

**Sometimes a heuristic suitable for one case is overgeneralized to an unsuitable case.**)(heuristic-overgeneralized)(start heuristic-overgeneralized)

I'm sure everyone's encountered a time an innocent phrase triggered anger or sadness, even though the current context bears no resemblance to the time when the phrase deserved that response. (e.g., "You people? What do you mean, 'you people'?!")

(stop heuristic-overgeneralized)

(

**A heuristic may never find the goal, perhaps by skipping back and forth between two nodes.**)(heuristic-nonterminating)(start heuristic-nonterminating)

I've found that in certain emotional states my mind tends to get caught in "mind loops", going back over the same memories or internal conversations over and over. The term "mind/mental/thought loop" seems to be generally understood as meaning something along these lines, so I think it's safe to say many others experience this.

Note, however, that some mind loops are conscious and logical, actually recalling the events at each step, whereas an emotional mind loop cycles through the emotional tags without needing to recall the actual memories. For example, a few anger-inducing memories can make you angrier over the course of the day as you repeatedly remember that they happened, without you having to actually remember *what* happened.

(stop heuristic-nonterminating)

Eric has the game on his Linux computer. He's in a Hangout with Joe. Eric wants to hear what Joe says and the game sounds, while Joe wants to hear what Eric says and the game sounds. So, we want to have the Hangout accept both Eric's microphone and The Dig's audio to send to Joe, while the Hangout's audio and The Dig's audio should go to Eric's headphones. (A diagram is in the first step of The Solution.)

After reading over this StackExchange post and messing with various loopback configurations, we were thoroughly confused about what to do and how to do it. Particularly, we really, really wanted some diagrams. (So this post will have plenty of them, and how to make them.) We also wanted some well-defined terms. (So this post will have a few of them.)

*Devices* are physical audio hardware like microphones, headphones, and speakers. *Virtual sinks* are virtual devices. PulseAudio by default connects each non-device to exactly one device. To connect two non-devices together, a virtual sink must be used. To connect two devices together, a *loopback* must be used. A loopback has exactly one input and one output, but a device may have as many loopbacks going in and out of it as desired.

In this post, I'll talk about everything from the perspective of the inner workings, particularly the loopbacks. So an "input" will be a stream-producing entity, and an "output" will be a stream-accepting entity. This coincides with the idea of a microphone being an "input device" and a speaker being an "output device".

This means listing vertically all your inputs on the left and your outputs on the right, and then drawing arrows between them indicating which should go where. In our problem, that looks like this:

Simply circling all your inputs and outputs that are real hardware devices is enough. To check which things are devices, you can find them in `pactl list` and see if they have any properties named `device.<something>`.

In our problem, we had a microphone and headphones that were both devices:

For each non-device, add a virtual sink to replace it. In our problem, we added three virtual sinks:

Now that everything in the middle is a device, each arrow in the middle represents a valid loopback, and you have a solution ready to implement.

You may be able to simplify your solution by removing some virtual sinks. Particularly, any virtual sink that has exactly one input and one output, with at most one of them a non-device, is unnecessary. In our problem, the fully simplified solution is:

In your design, be sure your virtual sinks and loopbacks are carefully *numbered*, not named, because when you implement your solution, the pavucontrol gui won't display any of their names, but will order them in the way you created them.

Use `pactl load-module` to create the virtual sinks and loopbacks. In our example, we used the commands:

```
# Two virtual sinks: V1 feeds Chrome's input, V2 collects the game's audio.
pactl load-module module-null-sink sink_name=Virtual1
pactl load-module module-null-sink sink_name=Virtual2
# Three loopbacks (mic -> V1, V2 -> V1, V2 -> headphones), all pointed
# at Virtual1 for now and rewired in pavucontrol later.
pactl load-module module-loopback sink=Virtual1
pactl load-module module-loopback sink=Virtual1
pactl load-module module-loopback sink=Virtual1
```

For each loopback, name the sink that should be its input, or at least one that isn't what you will make the output. (Why?)(why-inp-out)

(start why-inp-out)

You can change the input a sink ends up using without issue within pavucontrol, but it won't let you set the output to match what you name as the input here, since it'd theoretically form an immediate feedback loop.

(stop why-inp-out)

Keep track of the numbers the commands return. If things get out of hand, you can always restart by running `pactl unload-module <number>` (for each module you just made.!For loops.)(bash-for) (If you forgot them, use `pactl list modules` to find them again.)

(start bash-for)

The following line will unload every module from 52 to 56.

```
for i in {52..56}; do pactl unload-module $i; done
```

If you need to skip a few, you can hand-pick the numbers like this:

```
for i in 52 55 56; do pactl unload-module $i; done
```

(stop bash-for)

Run `pavucontrol` to open a GUI for setting inputs and outputs. Then, for each arrow in your solution diagram, you get to verify with pavucontrol that it is set. This is complicated enough that I made a video to demonstrate it:

The notes from the video give the following steps:

(Check the application outputs.)(check-application-outputs)

(start check-application-outputs)

Braid should go to V2... (I'm using Braid as my game here instead of The Dig.) Now I can't hear the game.

Chrome should already go to Headphones, since I hear it... Analog Stereo is Headphones, so good.

(stop check-application-outputs)

(Check the loopback outputs.)(check-loopback-outputs)

(start check-loopback-outputs)

*While we're here...* They're all set to V1 because of the command when we created them, so only the last one has to change.

Third loopback to Headphones, which is still Analog Stereo. Good. Now I hear the game.

(stop check-loopback-outputs)

(Check the loopback inputs.)(check-loopback-inputs)

(start check-loopback-inputs)

All monitoring V2, so only the first has to change.

First loopback from Mic... aka Built-in Analog Stereo. I like to whistle and watch the volume bar to check which one it is. Good.

(stop check-loopback-inputs)

(Check the application inputs.)(check-application-inputs)

(start check-application-inputs)

Chrome from V1... All good!

(stop check-application-inputs)

In case you're interested in how I made the diagram images, I used LaTeX with the tikz package. For the last graphic, for instance, I used (this code.)(graphic-code)

(start graphic-code)

```
\documentclass [tikz]{standalone}
% based on an example taken from
% http://www.guitex.org/home/images/doc/GuideGuIT/introingtikz.pdf
\usepackage {tikz}
\usetikzlibrary {shapes}
\definecolor {processblue}{cmyk}{0.96,0,0,0}
\definecolor {processpurp}{cmyk}{0.4,0.8,0,0}
\begin {document}
\begin {tikzpicture}[-latex ,auto ,
semithick ,
device/.style ={ ellipse ,top color =white , bottom color = processblue!20 ,
draw,processblue , text=blue , minimum width =1 cm},
nondevice/.style ={inner color = processblue!20, text=blue, minimum width=1 cm},
loopback/.style ={text=processpurp, color=processpurp}]
\node[device] (mic) at (0, 0) {Microphone};
\node[device] (cin) at (5, -1) {$V_1$};
\node[nondevice] (realcin) at (8, -1) {Chrome Input};
\node[nondevice] (realdig) at (-3, -2) {The Dig};
\node[device] (dig) at (0, -2) {$V_2$};
\node[device] (hp) at (5, -3) {Headphone};
\node[nondevice] (realcout) at (-3, -4) {Chrome Output};
\path (cin) edge (realcin);
\path (realdig) edge (dig);
\path (realcout) edge (hp);
\path[loopback] (mic) edge (cin);
\path[loopback] (dig) edge (cin);
\path[loopback] (dig) edge (hp);
\end{tikzpicture}
\end{document}
```

(stop graphic-code)

Then after running `pdflatex` on the file, I converted it to a png using ImageMagick with the command `convert -density 300 $SOURCE -flatten $DEST` where:

- `-density 300` specifies the pixel density of the rendering (since you are rasterizing a vector)
- `$SOURCE` is to be replaced with your source file
- `-flatten` changes the background to be white instead of transparent
- `$DEST` is to be replaced with your destination file

*Before getting into the "Why?", we must have the "What?".*

Every physics student learns (Newton's second law!Newton's second.)(newtons-second) pretty early.

(start newtons-second)

For every object, there is a value called [*inertial mass* \(m\)][(as opposed to *gravitational mass,* which dictates how strongly it pulls other objects with gravity, and appears to coincidentally be the same thing.)] which dictates how much it resists [changes in motion.][(That is, changes from moving at a constant velocity, in a straight line, as dictated by Newton's first law.)] The exact effect is given by the equation:

\[ F = m a \]

The formula states that for [a particular force \(F,\)][(the "cause" of a change in motion)] you divide by the mass to get the resulting [acceleration \(a,\)][(the time derivative of velocity, which dictates the change in velocity over time)] so objects with more mass have smaller changes in motion.
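For a quick numeric illustration (numbers of my own choosing): pushing a \(2\ \text{kg}\) object with a force of \(10\ \text{N}\) produces

\[ a = \frac{F}{m} = \frac{10\ \text{N}}{2\ \text{kg}} = 5\ \text{m/s}^2 \]

while the same force on a \(10\ \text{kg}\) object produces only \(1\ \text{m/s}^2\) of acceleration.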

(stop newtons-second)

The concept of impulse is given by (taking Newton's second and integrating!Integral definition.)(impulse-calculus) over time. (Or, if you don't have calculus yet...!Algebra definition.)(impulse-algebra)

(start impulse-calculus)

Remember that acceleration is the time derivative of velocity, so Newton's second law is

\[ F = m\frac{dv}{dt} \]

where the mass is constant, but the force and velocity are functions of time. Taking the integral over some time interval \(t_i\le t\le t_f,\) we get that

\[ \int_{t_i}^{t_f} F dt = \int_{t_i}^{t_f}m\frac{dv}{dt}dt = m(v_f - v_i) = m\Delta v \]

That is, if you find the total force acting on an object over a time interval, you get the mass times the change in velocity. The left-hand side is the quantity we call impulse, usually given in units of [\(\text{N}\cdot\text{s}.\)][(Newton-seconds, that is, force units times time units)]

(stop impulse-calculus)

(start impulse-algebra)

Without calculus, we assume that the force is constant over a time interval. For a particular object with fixed mass, Newton's second law \(F = ma\) then shows that the resulting acceleration is also constant over the time interval. When acceleration is constant, we get to use the standard kinematic equations so that [\(a = \frac{\Delta v}{\Delta t}.\)][(That is, the constant acceleration is equal to the change in velocity divided by the change in time, each calculated by subtracting the initial value from the final value.)] Then:

\[ F = ma = m\frac{\Delta v}{\Delta t} \\ F\Delta t = m\Delta v \]

That is, the force times the change in time gives the mass times the change in velocity. The left-hand side is the quantity we call impulse, usually given in units of [\(\text{N}\cdot\text{s}.\)][(Newton-seconds, that is, force units times time units)]

(stop impulse-algebra)
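As a quick numeric check of either derivation (with numbers of my own choosing): a constant force of \(10\ \text{N}\) applied for \(3\ \text{s}\) delivers an impulse of

\[ F\Delta t = (10\ \text{N})(3\ \text{s}) = 30\ \text{N}\cdot\text{s} \]

so a \(5\ \text{kg}\) object receiving it changes velocity by \(\Delta v = \frac{30\ \text{N}\cdot\text{s}}{5\ \text{kg}} = 6\ \text{m/s}.\)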

*If impulse is just force times time, and its use comes directly from Newton's second law, why learn it as a separate concept, with its own name and everything?*

The main reason is that after integrating Newton's second, (the other side is the change in momentum.)(change-in-momentum)

(start change-in-momentum)

The right side comes out to \(m\Delta v,\) which is equal to \(\Delta(mv),\) since mass is constant. Since \(mv=p,\) the momentum, this is exactly the change in momentum. (When mass is not constant and we have calculus...)(mass-varies)

(start mass-varies)

There are a [few cases][(most notably, rocketry)] where the mass of the object being considered changes dramatically during the time interval in which a force is applied. When that is the case, Newton's second law actually changes so that

\[ F = \frac{dp}{dt} \]

That is, the force is the rate of change of momentum. The product rule of derivatives shows that this form reduces to the original when mass is constant. (How?)(newtons-second-product-rule)

(start newtons-second-product-rule)

First, apply the definition of momentum \(p=mv\) and product rule:

\[ F = \frac{dp}{dt} = \frac{d}{dt}(mv) = m\frac{dv}{dt} + \frac{dm}{dt}v \]

and if \(m\) is constant, then its derivative is zero:

\[ F = m\frac{dv}{dt} + (0)v = m\frac{dv}{dt} = ma \]

and you are back to the original form.

(stop newtons-second-product-rule)

Further, if you take the integral of this expression now, you directly get that impulse is equal to the change in momentum.

(stop mass-varies)

(stop change-in-momentum)

The second reason is that in this form, it's clear how (Newton's third law implies conservation of momentum.!Conservation of momentum.)(newtons-third)

(start newtons-third)

Newton's third law is that every action (force) has an equal but opposite reaction (force). These force pairs always act between the same pair of objects, but in the opposite order, simply stated as, "When I push you, you push me back."

If we want to study the effect of Newton's third on two objects that are [only interacting with each other,][(so they have an equal-but-opposite force pair between them and no other forces)] all we know is that their mass-times-accelerations are equal in magnitude by Newton's second. Since the objects are likely to have different masses, that makes their accelerations almost certainly different.

However, there actually is another constant for both objects when they interact: the time interval we are studying. And that's what defining the impulse deals with, by moving that time interval in Newton's second over to the other side, incorporating it with the force. Since Newton's third applies to the force at every instant, the impulse pairs are equal but opposite over any time interval, as well. And thus, the change in momentum for the two objects together is zero, and momentum is conserved. (If there are more than two objects...)(closed-system)

(start closed-system)

In this case, the condition for conservation of momentum is basically the same: we need [the objects in our system to only be interacting with each other.][(that is, every force on an object in the system is caused by another object also in the system)] A system that meets this requirement is called a *closed system,* and it must be the case that momentum is conserved when the system is closed, since all impulses on the objects come in equal-but-opposite pairs.

Further, note that if the system is not closed, then there is a force acting on an object we're observing whose reaction force we're not taking into account. For any time interval, the total impulse delivered by these *external forces* is exactly how much momentum is added to the system.

(stop closed-system)
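To make the two-object case concrete (numbers of my own choosing): suppose a \(1\ \text{kg}\) object and a \(3\ \text{kg}\) object push off of each other, delivering equal-but-opposite impulses of \(\pm 6\ \text{N}\cdot\text{s}\). Then

\[ \Delta v_1 = \frac{6\ \text{N}\cdot\text{s}}{1\ \text{kg}} = 6\ \text{m/s} \qquad \Delta v_2 = \frac{-6\ \text{N}\cdot\text{s}}{3\ \text{kg}} = -2\ \text{m/s} \]

and the total change in momentum is \((1)(6) + (3)(-2) = 0,\) exactly as conservation requires.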

(stop newtons-third)

The integer \(8\) may be written as the sum of \(n\) consecutive integers, for some \(n\) strictly greater than \(1\). There is only one such \(n\); what is it?

The following two sections discuss approaches to finding the answer to this initial question efficiently. This work led to both our second question and our methods of solving it.

*Neither Jesse nor I wanted to rely on this reasoning because it doesn't generalize well. I figured I'd discuss it because it does illustrate good initial exploration.*

For this particular case, reasoning per start-of-sequence seems like an okay idea. For each possible start, you can just keep adding larger values of \(y\) until the sum meets or beats \(8\), and then move on to the next \(x\). There probably aren't that many cases to even check.

Before starting, it would be good to check (whether you'll stop at some point)(per-x-stops).

(start per-x-stops)

For each \(x\), (does the search for \(y\) stop)(per-x-y-stops)?

(start per-x-y-stops)

It is true that, since we aim for the positive value \(8\), for each \(x\) we fix there will be a unique smallest \(y\) to stop at with \(S(x,y)\ge 8\), and every longer sequence will only surpass \(8\) further.

(stop per-x-y-stops)

Do we (run out of \(x\)-values)(per-x-x-stops) to check?

(start per-x-x-stops)

Checking \(x=0\) is the same as checking \(x=1\), so we don't even have to worry about that case. Checking positive \(x\) will stop after \(x=8\), since any larger starting value already has \(S>8\). (Perhaps, at this point, you decide to check for solutions with positive \(x\).) Checking negative \(x\) seems more interesting, since the further negative the start, the further \(y\) can be and still possibly have \(S=8\). However, as you start adding larger \(y\), you'll notice that you keep cancelling out what you started with exactly. For instance, \(S(-2,2)=0\), so \(S(-2,5) = S(3,5)\):

\[ \begin{aligned} &(-2) + (-1) + 0 + 1 + 2 + 3 + 4 + 5 \\ &= 3 + 4 + 5 \end{aligned} \]

Since we won't even get positive sums until we pass that cancellation point when [\(\newcommand\abs[1]{\left\vert{#1}\right\vert}y=\abs{x}\),][(when we start with the negative of what we end with, and they all sum to zero)] for every [\(x<0\),][(negative start point)] we may as well start by trying [\(y>\abs{x}\),][(stopping past the cancellation point)] and in that case, the result is the same as \(S(\abs{x}+1,y)\), a sum of a sequence of only positive integers. So we don't even need to check negative \(x\); any solution will have a corresponding sequence with positive \(x\) and the same \(S\).

(stop per-x-x-stops)

(stop per-x-stops)

Let's (check for solutions)(per-x-check).

(start per-x-check)

This kind of result is best presented in a table. For each positive \(x\le 8\), a smallest \(y\) with \(S(x,y)\ge 8\) is found. To understand why we are only working with those \(x\), click on "whether you'll stop at some point" above.

\[ \begin{array}{c|c|c|c} x & y & S(x,y-1) & S(x,y) \\ \hline 1 & 4 & 6 & 10\\ 2 & 4 & 5 & 9 \\ 3 & 5 & 7 & 12\\ 4 & 5 & 4 & 9 \\ 5 & 6 & 5 & 11\\ 6 & 7 & 6 & 13\\ 7 & 8 & 7 & 15\\ 8 & 8 & - & 8 \end{array} \]

The only positive solution is \(S(8,8)\), but that one is technically disallowed. Then the corresponding solution with negative elements in the sequence, \(S(-7,8)\), is the desired solution. (The correspondence is discussed in "whether you'll stop at some point" above.) Its length is \(n = y - x + 1 = 8 - (-7) + 1 = 16\). (Seven negatives, eight positives, and \(0\).)

(stop per-x-check)
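The whole per-start search can also be sketched in a few lines of Python (my own illustration) to confirm the table:

```
def consecutive_sum_reps(target):
    # For each possible start x, extend the sequence until the running
    # sum meets or beats the target, per the stopping arguments above.
    reps = []
    for x in range(-target, target + 1):
        total, y = x, x
        while total < target:
            y += 1
            total += y
        if total == target and y > x:   # require at least two values
            reps.append((x, y))
    return reps
```

Running `consecutive_sum_reps(8)` finds only \((x,y)=(-7,8)\), the sequence of length \(n = y - x + 1 = 16\).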

*This is related to my reasoning. It’s pretty much how I talked through it with Jesse when I objected that there was no solution.*

Let’s study the (cases of possible \(n\))(n-cases), including the technically disallowed case \(n = 1\).

(start n-cases)

(The \(n = 1\) case.)(n-et-1)

(start n-et-1)

Then there is exactly one “solution” with the sequence containing only the integer \(8\).

(stop n-et-1)

(The \(n = 2\) case.)(n-et-2)

(start n-et-2)

Then the consecutive integers have opposite parity, and add to be odd. Since we want them to add to the even number \(8\), there cannot be a solution in this case.

(stop n-et-2)

(The \(n = 3\) case.)(n-et-3)

(start n-et-3)

Then the consecutive integers add up to \(3k\), where \(k\) is the middle integer. Since \(8\) is not divisible by \(3\), there cannot be a solution in this case.

(stop n-et-3)

(The \(n \ge 4\) case.)(n-ge-4)

(start n-ge-4)

We split this case up into two subcases:

(When all of the consecutive integers are positive.)(n-ge-4-pos)

(start n-ge-4-pos)

The consecutive integers sum to at least \(1+2+3+4=10\). Since \(8 < 10\), this subcase cannot produce a solution.

(stop n-ge-4-pos)

(When some of the consecutive integers are not positive.)(n-ge-4-neg)

(start n-ge-4-neg)

Let the smallest and largest numbers in the sequence be \(x\) and \(y\), and let the resulting sum be \(S(x,y)\). Since \(S > 0\) exactly when \(\abs{x} < \abs{y}\), a solution must satisfy that. Then a solution must have \(S(x,y) = S(\abs{x} + 1,y)\), and we know there will be a sequence with all positive values corresponding to whatever solution we find. We’ve already shown that the only positive sequence with sum \(8\) has \(n = 1\) and is \(S(8,8)\), but it does not count as a solution. The actual solution corresponding to it has \(n = 16\) and is \(S(-7,8)\).

(stop n-ge-4-neg)

(stop n-ge-4)

(stop n-cases)

*If you've skipped to this point, here's the notation: when a sequence is being considered, the smallest value is \(x\) and the largest is \(y\), and the sum \(x + (x+1) + \cdots + (y-1) + y\) is represented as \(S(x,y)\). Note that the length \(n\) is given by \(y - x + 1\), since \(y-x\) counts how many values come after \(x\).*

How special is \(8\) in this property? More thoroughly, what is the set \(K\) of integers \(k\) for which there is only one way to write \(k\) as a sum of at least two consecutive integers? (Why this was interesting to me...)(second-interesting)

(start second-interesting)

It seems like it should be a pretty restrictive property. There are a lot of sums out there, and as you consider larger numbers, there are even more sums to consider. So, there will probably be fewer elements of \(K\) as you look further from \(0\). But I don't even know whether \(K\) has an infinite number of elements off-hand — the elements could become more rare as we go larger, but still keep appearing, like the prime numbers, or there could be a point where so many sums are being considered that nothing has the property anymore.

(stop second-interesting)

How we approached this problem was to first get a feeling for what \(K\) might look like. Then we wanted to test our feelings, and divided up the next bit of work between the two of us to speed things up. Then we discussed our results and decided on how to prove our expectations were true.

*Before really digging in, we answered the following questions to get a feel for things.*

Do we need to study (negative elements of \(K\))(feeling-negative)?

(start feeling-negative)

No, because \(S(x,y) = -S(-y,-x)\) for every pair of integers with \(x\le y\). That implies that the possible sums to consider for a positive \(k\) are exactly the opposites of those to consider for its negative \(-k\).

(stop feeling-negative)

What are the (small elements of \(K\))(feeling-small)?

(start feeling-small)

We already know \(8\) is in \(K\), so let's find the nonnegative elements of \(K\) less than \(8\). \(0=S(-y,y)\) for every positive \(y\), so \(0\) is not in \(K\). For \(1\), it is easy to check that \(S(0,1)\) is the only solution with at least two values in the sequence, so \(1\) is in \(K\). Continuing in this way, \(2\) and \(4\) are in \(K\), while \(3=S(1,2)=S(0,2)\), \(5=S(2,3)=S(-1,3)\), \(6=S(1,3)=S(0,3)\), and \(7=S(3,4)=S(-2,4)\) show that those are not in \(K\).

(stop feeling-small)

What do we expect the (next value in \(K\))(feeling-expect) to be?

(start feeling-expect)

It seems so far that \(K\) only contains powers of \(2\). But that seems almost ridiculous, especially since we only have the elements \(1\), \(2\), \(4\), \(8\) to base it on. Regardless, \(16\) is a good next guess, not too close, not too far.

(stop feeling-expect)

Since checking values that end up in \(K\) tends to require more work than checking values that are not in \(K\), Jesse and I decided to split up our work like so: He'll tackle testing whether our guess for the next value in \(K\) is actually in \(K\), and I'll tackle showing that everything up to our guess isn't in \(K\).

*Jesse likes to make algorithms, so he decided to streamline the process of testing values in a way that meshed with our general observations.*

Can we make any (observations that generalize)(obs-general)?

(start obs-general)

As mentioned above, the reasoning per start value doesn't generalize well. One thing from both reasonings that generalizes well here is that we need only check the positive sequences. The second reasoning has other results that help here. For instance, we expect powers of two to be in \(K\), and there are certain values of \(n\) to exclude in that case. We showed \(n=2\) implies \(S\) is odd. We also showed when \(n=3\), \(S\) is divisible by \(3\). Since the most useful observations here are for fixed \(n\), it seems like that's what our algorithm will want to assume at each step. With that assumption, (another observation)(obs-fixed-n) makes the algorithm clear.

(start obs-fixed-n)

When \(n\) is fixed, notice that removing the smallest value and adding the next is the same as shifting all the values up by \(1\), so:

\[ S(x+1,y+1) = S(x,y) + n \\ \text{and more generally...} \\ S(x+z,y+z) = S(x,y) + nz \]

This is incredibly helpful, since that means you can just find one example sequence, such as \(S(1,n)\), and then the sums of the other \(n\)-length sequences are exactly the numbers that differ from \(S(1,n)\) by a multiple of \(n\).

(stop obs-fixed-n)

(stop obs-general)
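The shift identity is easy to sanity-check numerically; a throwaway Python sketch (the function name `S` mirrors the notation above):

```python
def S(x, y):
    # Sum of the consecutive integers x, x+1, ..., y.
    return sum(range(x, y + 1))

x, y, z = 3, 7, 4
n = y - x + 1                              # length of the sequence
assert S(x + 1, y + 1) == S(x, y) + n      # shift everything up by 1
assert S(x + z, y + z) == S(x, y) + n * z  # shift everything up by z
```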

What do those observations tell us (the general algorithm)(general-form-of-algorithm) should be?

(start general-form-of-algorithm)

We will check each possible \(n\) for sequences of \(n\) consecutive positive integers that sum to a \(k\) that is a power of \(2\). We assume \(n\ge 4\). (Why?!Why assume \(n\ge4\).)(why-n-ge-4) For each such \(n\) in increasing order, compute \(k-S(1,n)\) and check whether it is a multiple of \(n\). If it is, say the \(z\)th multiple of \(n\), then we have \(S(1+z,n+z)=k\), and \(k\) is not in \(K\). (Why?!Why \(k\) is not in \(K\).)(why-k-not-in-K) Otherwise, \(k\) may still be in \(K\), and we move on to test the next \(n\). We eventually reach an \(n\) with \(S(1,n)>k\), at which point no greater \(n\) can have any positive solutions.

(start why-n-ge-4)

\(n=1\) has only the trivial solution \(S(k,k)=k\), \(n=2\) has only odd sums, and \(n=3\) has only sums divisible by \(3\).

(stop why-n-ge-4)

(start why-k-not-in-K)

We also always have the solution \(S(-k+1,k)=k\), corresponding to the trivial \(S(k,k)=k\), so there are now two ways to express \(k\) as a sum of at least two consecutive integers.

(stop why-k-not-in-K)

(stop general-form-of-algorithm)
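That procedure is concrete enough to sketch in code. Here is one possible Python rendering (the function names are mine, not Jesse's actual code), assuming the input \(k\) is a power of two greater than \(1\), so the \(n\le 3\) cases are already excluded:

```python
def S(x, y):
    # Sum of the consecutive integers x, x+1, ..., y (closed form).
    return (y - x + 1) * (x + y) // 2

def survives_test(k):
    """Search n = 4, 5, ... for a positive sequence of length n summing
    to k.  Finding one means k is not in K; finding none means the only
    representation is the trivial S(-k+1, k), so k is in K."""
    n = 4
    while S(1, n) <= k:
        if (k - S(1, n)) % n == 0:
            # k - S(1, n) is the z-th multiple of n, so S(1+z, n+z) = k.
            return False
        n += 1
    return True
```

For instance, `survives_test(16)` returns `True` after checking only \(n=4\) and \(n=5\), since \(S(1,6)=21>16\) stops the search.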

How does this algorithm (apply to the \(k\) we want to test)(apply-algorithm)?

(start apply-algorithm)

We want to test whether \(k=16\) is in \(K\). As before, this is best presented in a table.

\[ \begin{array}{c|c|c|c} n & S(1,n) & k-S(1,n) & \text{divisible by $n$?} \\ \hline 4 & 10 & 6 & \text{no} \\ 5 & 15 & 1 & \text{no} \end{array} \]

and voilà, we're done, because \(S(1,6)=21>k\). Thus \(16\) is in \(K\), as predicted!

(stop apply-algorithm)

*For elimination, I noticed that it would be easier to continue my logic from before to see which values could not be in \(K\) for each \(n\).*

(The \(n=2\) case.)(elim-n-et-2)

(start elim-n-et-2)

The logic before was that the sum of two consecutive integers is always odd. Moreover, the converse is true: every odd is the sum of two consecutive integers. (Why?!Why the converse is true.)(elim-every-odd) This means that no odd \(k\) can be in \(K\), except for \(k=1\), whose sum of two consecutive integers is itself the trivial solution \(S(-k+1,k)=S(0,1)\).

(start elim-every-odd)

Every odd is an even plus one, so an odd \(k=2z+1\) for some \(z\). Then \(k=(z) + (z+1) = S(z,z+1)\), the sum of two consecutive integers.

(stop elim-every-odd)

(stop elim-n-et-2)

(The \(n=3\) case.)(elim-n-et-3)

(start elim-n-et-3)

The logic before was that the sum of three consecutive integers is always divisible by three. The converse is true here as well: every multiple of three is the sum of three consecutive integers. (Why?!Why the converse is true.)(elim-every-mult-of-3) Since the sequence of three consecutive integers is never the trivial solution, (Why?!Why sequence of three is nontrivial.)(nontrivial-sum-of-3) every multiple of three is not in \(K\).

(start elim-every-mult-of-3)

If \(k\) is a multiple of three, then \(k=3z\) for some \(z\). Then \(k=(z-1) + (z) + (z+1) = S(z-1,z+1)\), the sum of three consecutive integers.

(stop elim-every-mult-of-3)

(start nontrivial-sum-of-3)

The trivial solution \(S(-k+1,k)\) always has an even number of values in the sequence, namely \(n=2k\).

(stop nontrivial-sum-of-3)

(stop elim-n-et-3)
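Both converses hand us explicit witnesses, which a couple of lines of Python can confirm (a throwaway sketch; `S` is the sum as before):

```python
def S(x, y):
    # Sum of the consecutive integers x, x+1, ..., y.
    return sum(range(x, y + 1))

# Every odd k = 2z + 1 is S(z, z + 1);
# every multiple of three k = 3z is S(z - 1, z + 1).
for k in range(1, 100):
    if k % 2 == 1:
        z = k // 2
        assert S(z, z + 1) == k
    if k % 3 == 0:
        z = k // 3
        assert S(z - 1, z + 1) == k
```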

At this point, there are (only two \(k\) left to eliminate)(elim-k-left) with \(8\lt k\lt 16\).

(start elim-k-left)

They are \(k=10\) and \(k=14\). One thing to notice is that they are divisible by five and seven, respectively; perhaps that can be of use in the same way that we eliminated things divisible by three. Let's consider the \(n=5\) case. The middle value is still the average of \(x\) and \(y\), namely \((x+y)/2\), and the values below it sit as far below as the values above sit above, so the differences cancel out and \(S(x,y)=5(x+y)/2\), which is divisible by five. And, sure enough, the converse works: for \(k=5z\), \(S(z-2,z+2)=5z=k\). This eliminates \(k=10\) with \(S(0,4)\). Similarly, for \(k=7z\), \(S(z-3,z+3)=7z=k\), eliminating \(k=14\) with \(S(-1,5)\).

(stop elim-k-left)

We can (generalize these results)(elim-divisible-by-not-2).

(start elim-divisible-by-not-2)

Suppose \(k=nz\) for some odd prime \(n=2m+1\). Then we can make a sequence of \(n\) consecutive integers whose sum is \(k\): \(S(z-m,z+m) = nz = k\). Since \(n\) is odd, this sequence is a nontrivial solution, so \(k\) cannot be in \(K\). This generalized result implies that no element of \(K\) has an odd prime factor, so the powers of two are the only potential elements of \(K\), as predicted!

(stop elim-divisible-by-not-2)
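The construction in that argument is completely explicit, so it can be sketched directly (Python; `second_solution` is a name I'm making up):

```python
def S(x, y):
    # Sum of the consecutive integers x, x+1, ..., y.
    return sum(range(x, y + 1))

def second_solution(k, n):
    """Given k = n*z for odd n = 2m + 1 > 1, return the bounds
    (z - m, z + m) of an n-term consecutive sequence summing to k."""
    assert n > 1 and n % 2 == 1 and k % n == 0
    m, z = n // 2, k // n
    return (z - m, z + m)

# The two eliminations above:
# second_solution(10, 5) -> (0, 4);  second_solution(14, 7) -> (-1, 5)
```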

*Jesse and I reconvened, agreeing that we had now eliminated everything that we predicted should not be in \(K\). All that remained was to show that everything else actually is in \(K\).*

At this point, it seemed clear that we had two cases to discuss when incorporating my new results into Jesse's algorithm.

(The \(n\) is odd case.)(inco-odd)

(start inco-odd)

From the generalized results above, we now know that the sum must be divisible by \(n\) in this case. For odd \(n>1\), no power of two is divisible by \(n\), so we may now skip all of these \(n\) in Jesse's algorithm.

(stop inco-odd)

(The \(n\) is even case.)(inco-even)

(start inco-even)

This case seems more interesting. We already know that if \(n=2\), then the sum is odd, and we skip that case when \(k\) is a power of two other than \(1\). We can generalize that further and say the same when \(n\) is divisible by two but not four, because then the sequence will contain an odd number of odd values, and the sum will be odd. That leaves the subcase of (when \(n\) is divisible by four)(inco-divis-by-4).

(start inco-divis-by-4)

Now we have to study the result of the sum more closely. In particular, we know that for each power of two \(k\), we will have the trivial solution \(S(-k+1,k)\) with \(n=2k\), so we will expect some solutions to come from this case. At this point, we noticed that the formula \(S(x,y)=n(x+y)/2\) still holds, since every pair of sequence values, starting with the outermost and working in, add up to \(x+y\). Since \(x+y\) is always odd for even \(n\), it contributes no factors of two to the sum; thus, \(S\) is divisible by \(n/2\) but not \(n\). Then the only power of two that \(S\) could be equal to is \(n/2\) itself, and this is exactly the trivial solution we expect: when \(k=n/2\), \(S(-k+1,k)=k\).

(stop inco-divis-by-4)

(stop inco-even)
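Putting both halves together, \(K\) should consist of exactly the powers of two. A brute-force cross-check in Python (the bound and helper name are my own choices):

```python
def num_reps(k):
    # Number of ways to write k as a sum of at least two consecutive
    # integers, using the closed form S(x, y) = (y - x + 1)(x + y)/2.
    return sum(1
               for y in range(1, k + 1)
               for x in range(-k + 1, y)
               if (y - x + 1) * (x + y) // 2 == k)

limit = 200
K = [k for k in range(1, limit) if num_reps(k) == 1]
powers_of_two = [2 ** i for i in range(8)]   # 1, 2, 4, ..., 128
assert K == powers_of_two
```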

The whole point of showing the math process is to record most of my mistakes, and the reasoning behind steps that normally just gets thrown in the trash when writing a final draft. The upside is that readers get to see how I stumbled along the way, how I dealt with it, and how far I got without knowing where it would take me. The downside is that the primary information (the results, and their relationships) usually gets lost in the noise.

I decided to solve this by putting all the extra information about my mistakes and reasoning into side comments that the reader has to click to see. That way, a cursory glance over the document gives the backbone of results, while any step whose origin the reader is curious about can be investigated right there.

There are many ways to approach math process posts, depending on what your objective is:

If you just want to understand the results or the process primarily, always collapse or always expand the extra text, respectively, and simply read.

If you want to derive the results yourself, then leave the "proof" statements collapsed and try to prove each claim as it comes by. Expand each to check yourself when you finish the proof.

If you want to derive the process yourself, then only read anything when you have a good guess as to what it will say. When you have no idea what's coming, put it down and try to investigate things and make some guesses.

These three ways of approaching my math process posts actually apply to all formal math reading. It's just that, normally, everything is written together, and it takes practice to identify which bits you want to avoid reading. In general, the more you try to predict text before reading it, the better, but you have to balance that against the time you have available.
