
Aphantasia

I’m pretty sure I have aphantasia, but I’m not sure that I’ve always had it. Maybe my brain is weird! I’m writing this up just in case it’s interesting or helpful to someone else.

If you haven’t heard of it before, aphantasia means the inability to visualize things. If someone presents me with the classic example, “visualize an apple”, I can close my eyes and see… nothing. I can tell that my brain is attempting to project or identify an apple, but there’s no visual apple there. The thing that really tipped me off to having aphantasia was the Ball on the Table experiment. I won’t spoil it for you here, but just know that my result was 100% in line with aphantasia.

That said, I do seem to have the ability to recall visual memories. A bowl of apples that I saw this evening sitting on my living room table appears in my mind, though faded, and it feels as though my brain is exerting effort to recall the image. (I know, I know, there are no nerves in the mind. But sometimes it feels like certain parts of my brain are “straining” to work on something. The apples, for example, seem to be loading from my upper right hemisphere.)

An interesting complication that this presents is clothes shopping. On noticing an item on the sale rack, I’m unable to visualize myself wearing it – I have to actually try it on and look in the mirror. I’ve learned to rely on other heuristics for determining if an item is likely to look good on me.

I don’t seem to have aphantasia when I dream – my dreams are quite vivid, though never lucid. I don’t have exceptional recall of them, except for a handful. This seems to align with the research: a 2015 paper [pdf] reports that many folks with aphantasia do have visual dreams, or involuntary image flashes.

I sometimes get these involuntary image flashes, almost like a hallucination – that is, I can’t control the content of the image flash – by opening my eyelids just slightly and clearing my mind. It happens more when I am tired or on the verge of sleep. My eyes gently focus on nothing, and all of a sudden I get intense, photographic scenes – sometimes abstract, like grotesque faces morphing around, and other times concrete, like a scene of small sprites dancing around in a circle. The images don’t last long, and if I try to focus on them too much, they disappear or run to the edges of my vision like an eye floater. I think the image flashes are relatively new, as I’ll discuss below. They’re not unpleasant, though, and in fact are quite fun to see.

As a child, I was identified as a “visual-spatial” learner, and this lines up with various things that I can still do now. I remember songs on bass guitar by the “shape” that the notes make through time as I play them on the neck. I’m pretty good at games that require quick visual thinking (Galaxy Trucker), visual recognition (Spot it), or – perhaps unrelated – word recall (Anomia). By comparison, I’m not great at audio, somatic, or pop culture games.

That said, I don’t think I’ve always had aphantasia! And this is a new revelation to me, which came as I was discussing it with a friend and starting to actually take stock of it. I realized that I do have some memories of visual daydreams (aka fantasies, although that makes it sound more nsfw than it is), that I had when I was younger. The main one that comes to mind is from when I was probably 14 or 15 – I can recall the visuals of the situation clearly, as though they were a memory, yet I know that I was awake when I came up with this daydream.

Interestingly, I found a 2021 paper about a person who had acquired aphantasia from COVID-19. This immediately resonated with me. I seem to recall having a richer visual mind before I was sick with Covid in November of 2020. I’m hesitant to diagnose myself based on a single paper (or really, a single sentence in a Wikipedia article), but at the very least, this is very interesting to me in a way that feels like it could be correct.

That said, the fact that I have some visual memories of self-generated imagery before 2020, combined with the visual flashes that seem to be a new phenomenon, points to something weird going on. I did not lose my taste or smell during any of my (now three) bouts with Covid.

Taking a slight detour, a 2021 study found a correlation between aphantasia and anauralia, the inability to “visualize” music or audio in the mind. I find myself, perhaps, with the opposite: hyperauralia. I sometimes find an entire symphony or rock song playing in my head with full instrumentation, all playing at the same time. A “phonographic” memory, one might say. This happens for music I’ve heard as well as music that I’m coming up with, although usually the music that I invent is simpler and has fewer layers.

A strange thing about this that I haven’t come across before is that it seems to happen more as I get stressed or tired. Sometimes, I get exhausted (and a kind of brain fog creeps in), and I lie down to take a nap only to find a precisely-recreated song playing in my mind. As I drift off to sleep, the song will suddenly disappear, which coincides with a relaxing of my muscles and body. Perhaps I fell asleep for a moment and woke up? One time, the music turned off with an audible (to me) “POP”, which startled me. But I haven’t had that happen again, and I’m not sure I’ll ever get an explanation for it.

I’m certainly not an expert on this, and I only recently discovered /r/Aphantasia where a lot of information is being shared about this rare(?) condition. I’m quite happy to talk about this if you have questions and know me personally, or if you want to drop me an email! Brains are weird!!

(Thank you to Elena Stabile for reviewing and editing this post)

CUDA – Four

I’ve been busy with other things, but I woke up early and decided to get some CUDA studying in. I did talk with the hiring manager for the position that I’m interested in, who (as I expected) clarified that I didn’t actually need to know CUDA for this position. I’m still interested, though I should focus more on the Leetcode-style exercises that are more likely to come up on the interview.

That said, I haven’t been entirely ignoring this. I’ve been watching some 3Blue1Brown videos in my spare time, like this one on convolution. My calculus is definitely rusty (I don’t fully remember how to take an integral), but I’m mostly just trying to gain some intuition here so that I know what people are talking about if they say things like, “take a convolution”.

For today, I started by looking through the source of the sample code I got running last time. Thanks to the book I’ve been reading, a lot of the code makes sense and I feel like I can at least skim the code and understand what’s going on at a syntax level, for example:

__global__ void increment_kernel(int *g_data, int inc_value) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  g_data[idx] = g_data[idx] + inc_value;
}

Writing this mostly for my own understanding:

The __global__ qualifier marks this as a kernel – code that is called from the host but runs on the device. It takes in a pointer to an array g_data and an int inc_value. This kernel will be run for each element in the g_data array, and each instance of the kernel will operate on the element at the index calculated in idx. Each thread block of blockDim threads has a unique blockIdx, and each thread in that block has a unique threadIdx. Since we are working on 1D data (i.e. a single array, not a 2D or 3D array), we only care about the x property of each of these index variables. Then we increment the value at index idx by inc_value.
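Writing it out for myself, here’s roughly how I understand a kernel like this gets launched from the host. The block size and variable names below are my own sketch, not necessarily what the sample actually uses:

  // hypothetical launch – block size and names are my own guesses
  int threadsPerBlock = 512;
  int numBlocks = n / threadsPerBlock;  // assumes n divides evenly by the block size
  increment_kernel<<<numBlocks, threadsPerBlock>>>(d_data, inc_value);

Each of the numBlocks blocks runs threadsPerBlock copies of the kernel, which is how every element of the array ends up with its own thread.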

Ok, writing this up I think I have one question, which is about the .x property. The book explains that you can use the .x, .y, .z properties to easily split up 2D or 3D data, but also talks about ways to turn 2D or 3D data into a 1D representation. So are the .y, .z properties just “nice” because they allow us to leave 2D data as 2D, or do they actually allow us to do something that re-representing the 2D data as 1D data and just using .x doesn’t?
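To poke at my own question, here’s a sketch I wrote (not from the book) of what I think the two approaches look like for a width × height image stored in one linear buffer:

// 2D launch: each thread gets an (x, y) coordinate directly from the built-ins
__global__ void touch_2d(float *img, int width, int height) {
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  if (x < width && y < height) {
    img[y * width + x] += 1.0f;  // still flatten to index into the linear buffer
  }
}

// 1D launch over the flattened array: recover (x, y) by hand if needed
__global__ void touch_1d(float *img, int width, int height) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < width * height) {
    int x = idx % width;
    int y = idx / width;
    img[y * width + x] += 1.0f;  // same element as above
  }
}

As far as I can tell, both end up computing the same offset, so .y and .z look mostly like a convenience for readability and for matching the block shape to the data – but I’d like to confirm that as I keep reading.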

Ok, continuing on:

int main(int argc, char *argv[]) {
  int devID;
  cudaDeviceProp deviceProps;

  printf("[%s] - Starting...\n", argv[0]);

Start the main function and set up some variables, as well as letting the user know that we’re starting.


  // This will pick the best possible CUDA capable device
  devID = findCudaDevice(argc, (const char **)argv);

  // get device name
  checkCudaErrors(cudaGetDeviceProperties(&deviceProps, devID));
  printf("CUDA device [%s]\n", deviceProps.name);

Some questions here. What does it mean by “best”? Fortunately, the source for findCudaDevice is available to us. First it checks to see if a device is specified by command line flag, and if not, grabs the device “with highest Gflops/s”.

  int n = 16 * 1024 * 1024;
  int nbytes = n * sizeof(int);
  int value = 26;

  // allocate host memory
  int *a = 0;
  checkCudaErrors(cudaMallocHost((void **)&a, nbytes));
  memset(a, 0, nbytes);

Setting some variables first, but then we allocate some host memory. I was curious about cudaMallocHost. In the other examples I’d seen, host memory was usually created by just using malloc (or simply assumed to already be allocated, in the book). cudaMallocHost creates “pinned” memory, which is locked into RAM and is not allowed to swap. This allows us to use e.g. cudaMemcpy without the performance overhead of constantly checking to make sure that the host memory has not been swapped to disk.
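As a note to myself, here’s the pattern I understand pinned memory to enable – asynchronous copies that can overlap with kernel execution. This is my own sketch of the idea, not the sample’s actual code:

  // my sketch of the pinned-memory pattern, not the sample's exact code
  int *h_data = 0;
  int *d_data = 0;
  checkCudaErrors(cudaMallocHost((void **)&h_data, nbytes));  // pinned host memory
  checkCudaErrors(cudaMalloc((void **)&d_data, nbytes));      // device memory

  // async copies only truly overlap with other work when the host buffer is pinned
  checkCudaErrors(cudaMemcpyAsync(d_data, h_data, nbytes, cudaMemcpyHostToDevice, 0));
  increment_kernel<<<numBlocks, threadsPerBlock>>>(d_data, value);
  checkCudaErrors(cudaMemcpyAsync(h_data, d_data, nbytes, cudaMemcpyDeviceToHost, 0));

  checkCudaErrors(cudaDeviceSynchronize());  // wait for the copies and the kernel to finish
  checkCudaErrors(cudaFreeHost(h_data));
  checkCudaErrors(cudaFree(d_data));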

I’m still not used to the C convention of handling errors via macros like checkCudaErrors instead of language constructs like try/catch or if (err != nil). It just feels like an obsolete way of doing error handling that’s easy to forget.
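From poking at the samples’ helper headers, checkCudaErrors seems to boil down to something roughly like the macro below. This is a simplified sketch from memory, not the real definition in helper_cuda.h:

  // simplified sketch of an error-checking macro (needs <cstdio> and <cstdlib>)
  #define CHECK_CUDA(call)                                        \
    do {                                                          \
      cudaError_t err = (call);                                   \
      if (err != cudaSuccess) {                                   \
        fprintf(stderr, "CUDA error \"%s\" at %s:%d\n",           \
                cudaGetErrorString(err), __FILE__, __LINE__);     \
        exit(EXIT_FAILURE);                                       \
      }                                                           \
    } while (0)

So it’s not that different from checking a return code by hand – the macro just bundles up the boilerplate so you can’t forget it.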

That’s all I had time for this morning, but it’s fun to understand more and more about this as I continue to learn!

CUDA – Three

I ran a CUDA program 🙂

It was a rough experience 🙃

Honestly, getting started with pretty much any programming language involves a lot of banging your head against the toolchain, and slowly untangling old tutorials that reference things that don’t exist anymore. That said, this was easier than some Python setups I’ve done before.

I started with a pretty sparse Windows installation. I keep my computers relatively clean and wipe them entirely about once a year, so all I had to start with was VSCode and … that’s about it. I am lucky that I happen to already have a Windows machine (named Maia) that has a GTX 2080, which supports CUDA.

I installed MSVC (the Microsoft C++ compiler) and the NVIDIA CUDA Toolkit.

Then I tried writing some C++ (not even CUDA yet) in VSCode, and I couldn’t get it to compile. I kept getting an error that #include <iostream> was not valid. As I mentioned, I haven’t written C++ in about 10 years, so I knew that I was likely missing something. I putzed around installing and poking at various things. Eventually I switched out MSVC for MinGW (G++ for Windows), and this allowed me to compile and run my “hello world” C++ code. Hooray!

Now I tried writing a .cu CUDA file. NVIDIA provides an official extension for .cu files, and I had everything installed according to the CUDA quick start guide, but VSCode just did … nothing when I tried to run the .cu file with the C++ CUDA compiler selected. So I went off searching for other things to do.

Eventually I decided to install Visual Studio, which is basically a heavy version of VSCode and I don’t know why they named them the same thing except that giant corporations love to do that for whatever reason.

I got VS running and also downloaded Git (and then GitHub Desktop, since my CLI Git wasn’t reading my SSH keys for whatever reason).

Next, I downloaded the cuda-samples repo from NVIDIA’s GitHub, and it didn’t run – it turns out that the CUDA Toolkit version number is hard-coded in two places in the config files, and it was 12.4 while I had version 12.5. But that was a quick fix, fortunately.

Finally, I was able to run one on my graphics card! I still haven’t *written* any CUDA, but I can at least run it if someone else writes it. My hope for tomorrow is to figure out the differences between my non-running project and their running project to put together a plan for actually writing some CUDA from scratch. Or maybe give up and just clone their project as a template!


CUDA – Two

I have an art sale coming up in three days, so I’m spending most of my focus time finishing up the inventory for that. But in my spare time between holding the baby and helping my older kid sell lemonade, I’ve started exploring a few of the topics I’m interested in from the previous post.

Convolutions

Something I was reading mentioned convolutions, and I had no idea what that meant, so I tried to find out! I read several posts and articles, but the thing that made convolutions click for me was a video by 3Blue1Brown. The video has intuitive visualizations. Cheers to good technology and math communicators.

Sliding a kernel over data feels intuitive to me, and it looks like one of the cool things is that you can do it with extreme parallelism. I’m pretty sure this is covered early on in the textbook, so I’m not going to worry about understanding this completely yet.

It seems like convolutions are important for image processing, especially things like blur and edge detection, but also in being able to do feature detection – it allows us to search for a feature across an entire image, and not just in a specific location in an image.
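To make the sliding-kernel idea concrete for myself, here’s a toy 1D convolution I sketched out (mine, not from the book or the video – and strictly speaking it’s a cross-correlation, since it doesn’t flip the kernel, but the sliding idea is the same). Each GPU thread computes one output element, which is where the extreme parallelism comes from:

__global__ void convolve_1d(const float *input, const float *kernel,
                            float *output, int n, int kernel_size) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n) return;
  float sum = 0.0f;
  int half = kernel_size / 2;
  for (int k = 0; k < kernel_size; k++) {
    int j = i + k - half;           // neighbor index in the input
    if (j >= 0 && j < n) {          // treat out-of-bounds as zero padding
      sum += input[j] * kernel[k];
    }
  }
  output[i] = sum;
}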

One thing I don’t understand yet is how to build a convolution kernel for complicated feature detection. One of the articles I read mentioned that you could use feature detection convolution for something like eyes, which I assume requires a complicated kernel that’s trained with ML techniques. But I don’t quite understand what that kernel would look like or how you would build it.

Parallel Processing

I started reading Programming Massively Parallel Processors, and so far it’s just been the introduction. I did read it out loud to my newborn, so hopefully he’ll be a machine learning expert by the time he’s one.

Topics covered so far have been the idea of massive parallelism, the difference between CPU and GPU, and a formal definition of “speedup”.

I do like that the book is focused on parallel programming and not ML. It allows me to focus on just that one topic without needing to learn several other difficult concepts at the same time. I peeked ahead and saw a chapter on massively parallel radix sort, and the idea intrigues me.

Differentiation and Gradient Descent

Again, 3B1B had the best video on this topic that I could find. The key new idea here was that you can encode the weights of a neural network as an enormous vector, and then map that vector to a fitness score via a function. Finding the minimum of this function gives us the best neural network for whatever fitness evaluation method we’ve chosen. It hurts my brain a bit to think in that many dimensions, but I just need to get used to that if I’m going to work with ML. I don’t fully understand what differentiation means in this context, but I’m starting to get some of the general concept (we can see a “good direction” to move in).
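To restate the idea in code for myself: here’s a toy sketch (my own example, plain C++, nothing to do with real neural networks) where the “weights” are just a two-element vector, the “fitness” is a simple loss function, and we repeatedly step against the gradient to walk toward the minimum:

#include <cstdio>

int main() {
  // toy "weight vector" with two parameters; loss = (w0 - 3)^2 + (w1 + 1)^2
  double w[2] = {0.0, 0.0};
  double learning_rate = 0.1;
  for (int step = 0; step < 100; step++) {
    // partial derivatives of the loss with respect to each weight
    double grad[2] = {2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)};
    // move each weight a small step against its gradient (downhill)
    w[0] -= learning_rate * grad[0];
    w[1] -= learning_rate * grad[1];
  }
  printf("w0 = %f, w1 = %f\n", w[0], w[1]);  // should end up near 3 and -1
  return 0;
}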

I haven’t worked with gradients since Calc III in college, which was over a decade ago, but I’ve done it once and I can do it again 💪. It also looks like I need to understand the idea of total derivative versus partial derivative, which feels vaguely familiar.

Moving Forward

Once the art sale is over, I’ll hopefully have more focus time for this 🙂 For now, it’ll be bits and pieces here and there. For learning CUDA in particular, it looks like working through the textbook is going to be my best bet, so I’m going to focus some energy there.

From Grand Rapids,
Erty


CUDA – One

First, some backstory. I was laid off from Google in January and I’ve taken the last six months off, mostly working on art glass and taking care of my kids (one of whom was just born in April, and is sleeping on my chest as I write this). I’m slowly starting to look for work again, with a target start date of early September 2024. If you’re hiring or know people who are, please check out my résumé.

A friend of mine recently let me know about a really interesting job opportunity, which will require working with code written in (with?) CUDA. The job is ML related, so I’ll be focusing my learning in that direction.

I don’t know anything about CUDA. Time to learn! And, why not blog about the process as I go along.

First step: come up with some resources to help me learn. I googled something like “learn cuda” and found this Reddit post on the /r/MachineLearning subreddit. It looks like I’ll probably be learning a couple of related topics as I go through this journey:


CUDA

This is the goal. It looks like CUDA is a language + toolkit for writing massively parallel programs on graphics cards – programs that aren’t necessarily for graphics. Basically, making the GPU compute whatever we want. If we use this for, say, matrix multiplications, we can accelerate training of ML models.

Python and C++

C++? I haven’t written C++ since college, a decade ago. I think I remember some of it, but I’ve always been intimidated by the size of the language, the number of “correct” ways to write it, and the amount of magic introduced by macros. I also don’t like the whole .h / .cc thing, but I suppose I’ll just have to get used to that.

I’m pretty good at Python, having written several tens of thousands of lines of it at Google, so I’m not super worried about that.

PyTorch or TensorFlow

Some folks on the Reddit post linked above recommend a specific tutorial on the PyTorch website, which looks interesting. It seems like PyTorch is an ML library written in Python (based on Torch, which was written in Lua).

PyTorch is from Meta (now under the Linux Foundation); TensorFlow is from Google. Both use C++, Python, and CUDA.

Matrix Math

In college, I was only briefly introduced to matrix math, and most of that exposure was a graphics course that I audited. Based on my brief reading about all of this, it seems like the major advantage of using graphics cards to train ML is that they can do matrix math really, really fast. It’s up to me to brush up on this while I explore the other things. I don’t yet have a specific study plan for this.

Parallelism

According to redditor surge_cell in that previously linked thread, “There are three basic concepts – thread synchronization, shared memory and memory coalescing which CUDA coder should know in and out of [sic]”. I’ve done some work with threading and parallelism, but not recently. Most of my work at Google was asynchronous, but I didn’t have to manage the threading and coalescing myself (e.g. async in JS)

Resources

Ok – so, what am I actually going to do?

I browsed some YouTube videos, but the ones that I’ve watched so far have been pretty high level. It looks like NVIDIA has some CUDA training videos … from 12 years ago. I’m sure the language is quite different now. I also want deeper training than free YouTube videos will likely provide, so I need to identify resources to use that will give me a deep knowledge of the architecture, languages, and toolkits.

First, I’ll try to do the Custom CUDA extensions for PyTorch tutorial. See how far I can get and make notes of what I get stuck on.

Second, one of the Reddit posts recommended a book called Programming Massively Parallel Processors by Hwu, Kirk, and Hajj, so I picked up a copy of that (4th Ed). I’m going to start working through it. It looks like there are exercises, so I’ll be able to actually practice what I’m learning, which will be fun.

Finally, I’ll try implementing my own text prediction model in ML. I know you can do this cheaply by using something like 🤗 (aka HuggingFace) but the point here is to learn CUDA, and using someone else’s pretrained model is not going to teach me CUDA. I’m optimizing for learning, not for accurate or powerful models.

Questions

There’s a lot I don’t know, but here are my immediate questions.

  1. I have an NVIDIA card in my Windows computer, but I don’t have a toolchain set up to write CUDA code for it. I’m also not used to developing C++ on Windows, so I’ll need to figure out how to get that running as well. I have a feeling this won’t be particularly tricky, it’ll just take time.
  2. I have a lot of unknown unknowns about CUDA – I’m not even sure what I don’t know about it. I think I’ll have more questions here as I get into the materials and textbooks.
  3. It seems like there are a few parts of ML with various difficulties. If you use a pretrained model, it seems pretty trivial (~20 lines of Python) to make it do text prediction or what have you. But training the models is really, really difficult and involves getting a lot of training data. Or, perhaps not difficult, but expensive and time consuming. Designing the ML pipeline seems moderately difficult, and is probably where I’ll spend most of my time. But I need to understand more about this.

That’s it for Day One

If you’re reading this and you see something I’ve done wrong already, or know of a resource that helped you learn the tools that I’m talking about here, please do reach out!

From Grand Rapids,
Erty

ADHD / Atomoxetine

Edit: added some updates 2024-03-01
Edit: I’m no longer taking Atomoxetine because the side effects were just too much. But I’m leaving this post in case it helps someone in the future!

I’ve been quite open about the fact that I was diagnosed with ADHD in December of 2021, and in January I started taking Atomoxetine (brand name Strattera) for it. I think it’s important to talk about neurological health and mental health; the stereotype of guys not talking about mental health is true. When I revealed to my friends that I had ADHD, several of them confided to me that they had a similar diagnosis – which I didn’t know! I think if we’d all been open about it, the process would have been easier for me to navigate. So: I have ADHD and am happy to talk about it.

Everything else in here is Not Medical Advice. Talk to your doctor. I just want to share my experience.

I was *very* nervous about getting a prescription for ADHD medication, since I feel like I get addicted to things easily, and I’d heard that many of the drugs on the market (especially the stimulants like Adderall) were habit-forming. The idea of adding something to my life that would be difficult to remove later really scared me.

To theorize for a moment, I think that the availability of only the stimulants – and perhaps also a lack of data on correct dosages and a lack of time-release pills – led to the “ADHD zombie” stereotype that I saw a lot of in the oughts and tens, when I was really struggling to exist in the neurotypical-centered school system and really could have used the diagnosis. I was afraid of this “zombie” outcome, which is part of why it was only after lots of therapy that I was willing to attempt a diagnosis.

My psychopharmacologist ($10 word) told me that there are newer (well, since 2002, compared to Ritalin in 1944(!)) non-stimulant drugs for ADHD that are less likely to be effective, but without the habit-forming effects. With my doctor, I decided to try Atomoxetine (brand name Strattera), and I’ve been on it since January.

I only have a layperson’s understanding of how Atomoxetine works, but here’s my attempt: Atomoxetine is a noradrenaline reuptake inhibitor. This means that noradrenaline that I produce will linger in my system for longer, leading to an increase in my alertness level, but without artificially increasing my noradrenaline creation or release – it just slows the rate at which it’s reabsorbed if not used.

The effect is that I no longer need to stim as much to keep alert and not bored. And as a dad of a three-year-old who wants to play doctor for the 1000th time, not needing new stimulation to avoid wandering away or becoming distracted allows me to stay focused on play with my daughter, which makes it incredibly worth it. Being able to focus at work is also good 🙂 The medicine just makes it like, 30% easier to say “no” to distractions (e.g. phone, internet) which is enough for me.

My Experience with Vyvanse

After taking Strattera for a while but getting sick (lol) of the side effects, I finally got over my fear of stimulants and tried Vyvanse (I can’t recall the dosage, but IIRC I started at half the normal amount, so, 10mg? 20mg?). My layperson’s understanding is that Vyvanse is basically Adderall (amphetamine salts) with a lysine amino acid attached. Your body metabolizes off the lysine at a certain rate, which then activates the stimulant. Because your body can only metabolize so much lysine at once, the stimulant enters your system slowly instead of all at once, making for a smoother experience.

At least, that’s what’s supposed to happen. Even on half the normal dosage, I ended up feeling incredibly high and lightheaded for the two days that I was on it, and my memory and executive function were much worse than normal. I took a half dose and then nothing, but I still experienced a deep depression afterward for a few days.

I talked with a relative who also has ADHD and they said they had the same experience with Vyvanse/Adderall, so maybe we just metabolize it in a weird way? I’m sure stimulants work for some folks but they don’t seem to work for me. I did like that you could split the dose up or mix it in with food.

Erty’s Non-Medical-Advice guide to Atomoxetine/Strattera

This is my own experience based on about two years of taking Atomoxetine. The main side effects I got were nausea/lightheadedness, high heart rate, anxiety, and some acid reflux/heartburn. However, I’ve mitigated most of them using the methods below.

Heartburn

The heartburn isn’t super bad and I can take tums for it. Sometimes it leads to a phlegmy throat, which goes away with tea and throat clearing. I’m not entirely sure that this is related to the Atomoxetine, it might just be that I’m getting old, lol.

Nausea / Anxiety

I think the nausea and anxiety are related to an empty stomach OR too much of the meds at once. My doctor started me at 40mg, which I eventually figured out was way too much for me (for reference: I’m 130lbs), so I dropped to 25mg and felt much better while still feeling like I was getting the benefits of the medicine. I also found that I could resolve the nausea/anxiety by eating a large, protein-filled meal. For example: I would get the side effect if I ate a bowl of cereal (carbs), but not if I ate a bowl of cereal with a bunch of peanut butter on top (oils/fats/protein). I finally tried 10mg, and am now on 18mg (just starting it!). So it took me a while to find the “right” dose.

I also think the nausea is worse at the beginning of the meds (or if you skip a few days). It seems like my body got used to the meds and figured out a homeostasis with them that didn’t involve nausea. Thanks, body.

The anxiety also seemed to crop up when I ended up with an empty stomach and meds still in my system. This was most pertinent when I would wake up in the middle of the night and feel “chemically anxious” – not anxious about anything in particular but just like, anxious. I eventually realized this was because my stomach was empty. I started eating a healthy midnight snack just before bed and that helped resolve it. I also started taking Magnesium Glycinate supplements which really helped reduce the anxiety.

High Heart Rate

I found that my heart rate would spike if I drank caffeine at the same time as my meds – like, a resting heart rate of 90 when it’s usually more like 55-65. In fact, I was able to completely drop my two-cups-a-day caffeine habit without any withdrawal. Nowadays I can have one cup of coffee in the morning without making myself too anxious, although I sometimes opt for decaf or half-caf.

Taking the Meds at Night

At the advice of my doctor, I tried taking the meds at night right before I went to bed. The idea was that the worst of the nausea happens a few hours after I take the pill, so if I’m asleep, I won’t notice. This worked ok! But it also made me wake up at 5am or so most nights. That … might be a good thing? But I don’t know, I ended up more tired later and my sleep schedule was sometimes biphasic which left me exhausted the next day.

Eventually I switched back to taking the meds in the morning / early afternoon.

Overview

I don’t like eating a big protein breakfast though, so my current (working for me) system is:
– Morning: Drink a small cup of coffee with a very light or no breakfast.
– Around noon: Eat a large lunch with some protein/oils/fats and take 25mg atomoxetine.
– Around 6pm: Eat a normal dinner.
– Around midnight: Eat a medium meal with Mg supplement and some sleepytime tea, fall asleep.

All in all, I’m quite happy with the meds. I don’t notice anything really different when I take them, but when I don’t take them, I can tell that it’s much harder to get things done, and my partner especially notices. In fact, there are days where I’ve later said, “Oh, I forgot to take my meds yesterday” and she’s replied, “oh, that makes yesterday make so much more sense”. So obviously they’re doing something.

I hope this helps someone!

From Grand Rapids,

Erty

Mental Model Metacognition

I’ve been seeing a therapist recently, and it’s been quite nice to be able to take an hour to introspect my mental processes with help from a professional. I also enjoy the time set aside to focus on myself – I don’t feel like I’m taking over the conversation or being selfish with the time.

Aside: I’m not working through any particular issue or trauma in these sessions – a fact that reminds me how messed up the health care system is. Folks who would really benefit from therapy are unable to afford it, while my employment and insurance provide it to me for very cheap. I acknowledge that I’m very privileged in this way.

I think about the idea of a Mental Model a lot, both for myself (“what is my mental model of X”) and for others (“how can I teach Y so that the student has a good mental model”). Teaching, I like to say, is the act of building and debugging mental models in others.

When we interact with the world, we use our mental models to predict what will happen given certain conditions and actions. When we get a mental model wrong, it can be bad, embarrassing, or harmful. Having an incorrect mental model of how a stove works might cause a burn, for example.

Therapy like mine, then, is a way of airing out my own mental models. I show them to someone else and they give me feedback. In this way, I refine the mental models to make better predictions and live a better life.

A 2019 post by Alok Singh titled Mental Model of Dental Hygiene got me thinking about the practice of publicly airing out a mental model. I think about Alok’s post often (a lot of the time while brushing my teeth, natch), and about applying this technique in my own life.

Journaling surely helps in this way as well – the act of organizing a mental model so that it can be written, and viewed as a whole, allows for a different kind of processing. But a journal also doesn’t provide feedback. There’s a certain risk that people take when sharing a mental model with the world. This is perhaps why a journal is kept private, and therapists have laws around confidentiality. The more risk you’re willing to take on, the more and wider feedback you can get.

But why is there a risk? Here’s my mental model (ha ha!) about what that risk is:

  1. A wrong mental model can be embarrassing
    1. People don’t like feeling like they’re wrong, and especially in ways that form a foundation for other thought. Revealing a mental model that’s wrong can invite scorn, teasing, and other humiliation.
    2. Sometimes, this causes people to double down on a wrong mental model instead of abandoning it.
  2. The mental model reveals a deeper kernel that’s shameful
    1. One might reveal a mental model that relies on an assumption that’s e.g. racist or sexist, which would cause them to lose respect or face repercussions.
  3. The mental model conflicts with a political, ideological, or commercial standpoint
    1. You may find that people are resistant – even physically – to a mental model being shared. Purely for example: Alok’s post might run afoul of folks who believe in conspiracy theories about Fluoride, or dentists who make money off of fixing cavities.
    2. I find myself often overweighting this risk. The odds are low but the penalty is high.

But perhaps it’s in the face of this risk that sharing a mental model becomes even more important – you can simultaneously retool your own process and at the same time influence someone else’s.

I sat down today to write out my mental model of Dopamine and focus but wrote this instead, which is apropos. I’ll have to follow up with another post.

From Grand Rapids,
Erty


Teaching Programming

Note: This is a post I drafted in 2017 and am publishing now in 2020.

My Background

I’m working as a tech educator now – my official title is “Lead Instructor”. I even have a super fancy business card:

[image: business card]

This is after “years of industry experience” and “many hours of classroom instruction”, specifically volunteer work through TEALS-K12 teaching computer science in Brooklyn, work-study teaching Python to 7th–12th graders during college, and teaching a self-designed ActionScript/Flash curriculum to peers during high school.

I think a lot about education. I think a lot about quotes like @LiaSae’s, because I agree with them. I believe the current problems with the American school system are problems of economics, not education. The problem is that it’s the “Silicon Valley Jerks” like myself who have the wherewithal (read: money/privilege) to try crazy things like coding schools, but we don’t have the teaching background. Someone needs to combine the two.

Teaching Certificates and Regulation

The boot camp I work for is accredited by the state (CODHE). The application process is rigorous and it takes hundreds of hours to put together the documents. This ensures that CODHE has reviewed our curriculum and approved it. But, this means that if we come up with a “better” curriculum that we’d like to try, there’s no room to pivot. We would have to go through the approval process again, which nobody wants to do. Pros and cons.

(Edit from 2020: we actually could pivot. As far as I can tell there was very little oversight and so calling “audibles” to switch out materials or teach something different was totally fine, as long as we believed it would benefit the students. The state just wanted to make sure we were actually thinking through this stuff? The company is now defunct so I feel like I can admit this).

I don’t have a teaching certificate. As far as I can tell, the need for programming teachers is so huge right now that they’ve basically dropped that requirement. You also don’t need a teaching degree at a private institution like the one that I’m teaching at now. I understand that this is a huge gap in my experience and I’ll certainly be getting one right after completing an undergrad teaching preparation course, 800 hours of teaching experience, and… yeah… not going to happen anytime soon. I can’t even find if there’s an accredited program for getting a computer science teaching certificate (maybe at Regis? What a convoluted PDF).

As far as I can tell, no teachers at bootcamps have teaching certificates (let me know if you do!). Does this mean we’re all those silicon valley jerks? Yes, and that sucks for the students and it sucks for the companies that try to hire out of boot camps like mine. Lots of people argue that boot camps should be regulated by the government and I agree with them and I think @LiaSae would too. But then, I would be out of a job and 99.99% of bootcamps would shut down.

(Edit from 2020: it seems like this was a good idea. I should have taken a harder stance on this, since it does seem like a lot of fly-by-night bootcamps popped up and quickly shuttered. The one I worked for, I believe, was doing a lot of the right things, but the market quickly saturated for junior developers and we started not being able to get people hired. After I quit – taking a job with a 30% salary bump – the bootcamp I had worked for wasn’t able to attract enough students and shuttered.)

It’s hard enough convincing programmers to take a pay cut and work at a boot camp. It’s even harder to do that to programmers who have the prerequisite charisma to teach, since that probably means they have the prerequisite charisma to climb a corporate ladder. All of my favorite teachers have done the job because they love inspiring students and because they love teaching.

I taught programming at the School for Human Rights in Brooklyn, NY, through the TEALS-K12 program. The TEALS program takes tech professionals and places them in morning classes – usually four volunteers to a class – and has them teach programming before they go in to work. They teach not only the students, but also the teacher, with the hope that the teacher will be able to teach the class on their own after 3-4 years of instruction.

None of these volunteer teachers are regulated or licensed, though they are trained. The year I participated, they also expected us to come up with the assignments and lessons for the class. It went… poorly. (Based on our feedback, they’ve hired curriculum developers and really flushed out the materials. Based on their new materials and their progress, I highly recommend volunteering through them if you’re interested in tech education but you can’t quit your job just yet.)  I was lucky that I’d done amateur curriculum design before, and one of my co-teachers had been a licensed chemistry teacher for years. That said, if they tried to hire only programming professionals who had teacher licensure, they’d have just a handful of schools in their program instead of 161.

I think this is a great example of Silicon Valley Jerks who know nothing about education really trying to make a difference. Is it the best teaching experience for the students? No. I certainly floundered a lot when I was learning to teach. But it’s certainly better than no technology education at all due to a lack of resources.

(Note from 2020: The more I learn  about how schools are funded by property tax to specifically benefit rich folks’ schools, the more I strongly believe we need more fundamental reform than just sending Silicon Valley Jerks like myself to go teach at underfunded locations. But it’s a good bandaid in the meantime. You need to treat the symptoms while you treat the cause.)

I’m not sure where I stand on the issue of teacher licensure in computer science, and I would love to incite a discussion on the issue. Pros mostly involve better oversight, better instruction technique, and more consistent curricula. Cons involve not being able to move fast and pivot, an even greater undersupply of teachers, and fewer learn-to-code startups. Imagine if Salman Khan had to get a teaching license first.

(Note from 2020: I’m back to teaching, but this time I’m actually titled as an “Adjunct Lecturer” at Howard University through a program at Google, where I work now. It’s the same thing: send us Silicon Valley Jerks into programs to help build out the pipelines. We need to do this and ALSO address the underlying economic and prejudice problems that lead to this, by rethinking how we do things like student loans and school funding. I originally had some more topics to cover here, but I think what’s above is good on its own.)

From the Upper Peninsula of Michigan,

Erty

Awkwardness, Wavelengths, and Amplifiers

I’m an introvert, and although I can pass as an extrovert in certain situations (like being in front of a crowd, or when hanging out with people I know well), I still have a problem with small talk. I’ve run into this problem a couple of times over the last few days, where a conversation I’m having with someone suddenly dries up. I often make it worse by sighing or shifting my gaze downward. There’s just nothing to say for a moment and we (this happens in one-on-one and small groups, mostly) just sit for a while and steep in the silence.

Often we get the conversation going again (I usually try to ask about hobbies, goals, work, etc) but it’s a painful reminder that I’m not great at keeping a conversation going.

But there are some people I seem to be able to talk with at length, about all sorts of topics, until some external force has to intervene to end the conversation. I, of course, try to make friends with these people and hang out with them often, but occasionally it happens with a stranger. What mysterious force is it that suddenly makes me able to hold a deep and intelligent conversation with someone, without having to resort to “small talk”? I was thinking about this on the subway home after a party tonight and framed it in an interesting way that I thought might make a good essay.

Note, I don’t want to claim I’ve “discovered” anything or that this is “the way”, I just want to explore this idea and would love feedback on it.

People have certain interests, and various intensities of those interests. I might call these “wavelengths” – a frequency (topic) and an amplitude (depth of knowledge / interest) – and people carry a multitude of them. I, for example, could talk to you at length about webcomic publishing, or perhaps the 1987 roguelike computer game Nethack, or how everything about the Scott Pilgrim movie was perfect except for Michael Cera. For all of these things I have factoids, opinions, and, perhaps most importantly, an interest in discussing them.

If you ask me to talk about gasoline cars, or maybe the Kardashians, or football, I have a thought or two but you’ll quickly discover that I’m not “on that wavelength.” The conversation can’t last long because I don’t have much to contribute. I’ll say, “hmm, interesting” and listen to you and be happy to learn some things, but I won’t have anything real to contribute. And so unless you’re very passionate about the topic, the conversation will soon end and I’ll make an excuse about having to refill my drink and wander off to find a new conversation. Which is fine, I bet you don’t want to be in this staring-at-the-floor-contest any longer either.

There are also some real dampeners, which one should seek to avoid. Some people don’t like to talk about some things for real reasons, and it’s not kind to force them onto those topics.

And so striking up a conversation is a frequency-searching exercise. What do we have enough in common to talk about? Work, sure. The weather, sure. Complaining about the MTA, sure. But those things aren’t (usually) the kind of things that get people really excited. And sometimes they’re dampeners, like when someone is having a bad time at work and you ask them how work is going. But it’s difficult, since the things people really like to get into the weeds about are often obscure, and there’s a strange pressure against just opening conversations with, “hey, are you into Nethack?” unless there’s some reason, like I saw you playing Nethack. I think it’s a failure thing; if I get all excited – “oh, are you a Nethack fan?” – and the response I get is “what’s that?”, then I know I’m in for giving an explanation, which isn’t the same as a conversation.

Which of course is one of the reasons that the internet is so neat. I can just click some buttons on my $2000 Facebook machine and get instantly connected to a large group of Nethack fans. Sometimes these online conversations spill over into real life. But often the Venn diagram of people I hang out with IRL and the people who are in these online groups is two circles. These “lifestyle enclaves” (see Habits of the Heart by Bellah et al) of people on the same wavelength can also become dangerous echo chambers.

But the best, the best thing is when you run into someone who is an amplifier on your wavelength. My partner is like this for a lot of things, where we both get excited about something and end up being able to talk about it for a long time. And I have a friend who is like this for technical things – once we start coming up with tech and business ideas it’s very difficult to stop.

But to do this, your wavelengths have to be similar, and just like music there have to be other notes – other wavelengths that you can bounce off of to add interest to the conversation without it falling flat. And these amplifiers are rare. You know them when you find them and you hold on to them. They’re people who hear your ideas and “yes and” them, sending the wavelength back to you, but louder. You’re safe to explore here. You can even dig around for new wavelengths together, since you can always return to your common ground if nothing turns up.

There are some people who seem to be able to frequency-hop easily. It’s practice, I know, but I’m not that good at it. And as an awkward nerd-human I’m terrible at hiding when I’m uninterested in a wavelength – I quickly lose interest. My partner is great at this – she has the ability to work with people across a much wider variety of interests, be (or at least seem) interested in what they’re saying, and carry on a conversation. This is a skill I’m still working on, but it is a skill that can be practiced.