
CUDA – Four

I’ve been busy with other things, but I woke up early and decided to get some CUDA studying in. I did talk with the hiring manager for the position that I’m interested in, who (as I expected) clarified that I didn’t actually need to know CUDA for this position. I’m still interested, though I should focus more on the Leetcode-style exercises that are more likely to come up in the interview.

That said, I haven’t been entirely ignoring this. I’ve been watching some 3Blue1Brown videos in my spare time, like this one on convolution. My calculus is definitely rusty (I don’t fully remember how to take an integral), but I’m mostly just trying to gain some intuition here so that I know what people are talking about if they say things like, “take a convolution”.

For today, I started by looking through the source of the sample code I got running last time. Thanks to the book I’ve been reading, a lot of the code makes sense and I feel like I can at least skim the code and understand what’s going on at a syntax level, for example:

__global__ void increment_kernel(int *g_data, int inc_value) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  g_data[idx] = g_data[idx] + inc_value;
}

Writing this mostly for my own understanding:

The __global__ qualifier marks this as a kernel – code that is called from the host but runs on the device. It takes in a pointer to an array g_data and an int inc_value. This kernel will be run for each element in the g_data array, and each instance of the kernel will operate on the element at the index calculated in idx. Each thread block of blockDim threads has a unique blockIdx, and each thread within that block has a unique threadIdx. Since we are working on 1D data (i.e. a single array, not a 2D or 3D array), we only care about the x property of each of these index variables. Then, we increment the value at index idx by inc_value.
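For my own reference, here’s roughly how a kernel like this gets launched from the host. This is a sketch, not the sample’s exact code – the names threadsPerBlock, blocks, and d_data are my own placeholders:

  int n = 16 * 1024 * 1024;
  int threadsPerBlock = 512;
  int blocks = n / threadsPerBlock;  // assumes n divides evenly; no bounds check in the kernel
  increment_kernel<<<blocks, threadsPerBlock>>>(d_data, 26);

Inside the kernel, blockDim.x is then 512, blockIdx.x runs from 0 to blocks - 1, and threadIdx.x runs from 0 to 511, so idx covers every element exactly once.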

Ok, writing this up I think I have one question, which is about the .x property. The book explains that you can use the .x, .y, .z properties to easily split up 2D or 3D data, but also talks about ways to turn 2D or 3D data into a 1D representation. So are the .y, .z properties just “nice” because they allow us to leave 2D data as 2D, or do they actually allow us to do something that re-representing the 2D data as 1D data and just using .x doesn’t?
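To make the question concrete to myself, here’s my understanding of the two styles side by side (my own untested toy example, not from the sample):

__global__ void scale2d(float *img, int width, int height) {
  // 2D launch: each thread gets an (x, y) pair directly.
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  if (x < width && y < height)
    img[y * width + x] *= 2.0f;  // the memory itself is still flat
}

__global__ void scale1d(float *img, int width, int height) {
  // 1D launch over the same data: recover (x, y) by hand.
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < width * height) {
    int x = idx % width;
    int y = idx / width;
    img[y * width + x] *= 2.0f;
  }
}

As far as I can tell the memory is flat either way, so the 2D version mostly saves the divide/modulo and lets the block shape map onto image tiles – but I’d like to confirm there’s nothing deeper going on.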

Ok, continuing on:

int main(int argc, char *argv[]) {
  int devID;
  cudaDeviceProp deviceProps;

  printf("[%s] - Starting...\n", argv[0]);

Start the main function and set up some variables, as well as letting the user know that we’re starting.


  // This will pick the best possible CUDA capable device
  devID = findCudaDevice(argc, (const char **)argv);

  // get device name
  checkCudaErrors(cudaGetDeviceProperties(&deviceProps, devID));
  printf("CUDA device [%s]\n", deviceProps.name);

Some questions here. What does it mean by “best”? Fortunately, the source for findCudaDevice is available to us. First it checks to see if a device is specified by command line flag, and if not, grabs the device “with highest Gflops/s”.

  int n = 16 * 1024 * 1024;
  int nbytes = n * sizeof(int);
  int value = 26;

  // allocate host memory
  int *a = 0;
  checkCudaErrors(cudaMallocHost((void **)&a, nbytes));
  memset(a, 0, nbytes);

Setting some variables first, but then we allocate some host memory. I was curious about cudaMallocHost. In the other examples I’d seen, host memory was usually created by just using malloc (or simply assumed to already be allocated, in the book). cudaMallocHost creates “pinned” memory, which is locked into RAM and is not allowed to swap. This allows us to use e.g. cudaMemcpy without the performance overhead of constantly checking to make sure that the host memory has not been swapped to disk.
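Here’s a sketch of how I understand the pinned-memory workflow fitting together (my own untested example, not from the sample – h_pinned, d_data, and stream are placeholder names):

  int *h_pinned = 0, *d_data = 0;
  checkCudaErrors(cudaMallocHost((void **)&h_pinned, nbytes));  // pinned host memory
  checkCudaErrors(cudaMalloc((void **)&d_data, nbytes));        // device memory

  // As I understand it, truly asynchronous copies require pinned host
  // memory; with plain malloc'd memory the copy can't overlap other work.
  cudaStream_t stream;
  checkCudaErrors(cudaStreamCreate(&stream));
  checkCudaErrors(cudaMemcpyAsync(d_data, h_pinned, nbytes,
                                  cudaMemcpyHostToDevice, stream));

  checkCudaErrors(cudaStreamSynchronize(stream));  // wait for the copy to finish
  checkCudaErrors(cudaFreeHost(h_pinned));         // pinned memory has its own free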

I’m still not used to the C convention of handling errors via macros like checkCudaErrors instead of language constructs like try/catch or if (err != nil). It just feels like an obsolete way of doing error handling that’s easy to forget.
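Out of curiosity, here’s roughly what the pattern boils down to. This is my own simplified version, not the actual definition in helper_cuda.h (which I believe is templated):

#include <stdio.h>
#include <stdlib.h>

#define checkCudaErrors(call)                                   \
  do {                                                          \
    cudaError_t err = (call);                                   \
    if (err != cudaSuccess) {                                   \
      fprintf(stderr, "CUDA error: %s at %s:%d\n",              \
              cudaGetErrorString(err), __FILE__, __LINE__);     \
      exit(EXIT_FAILURE);                                       \
    }                                                           \
  } while (0)

Every CUDA runtime call returns a cudaError_t, and the macro just checks it and bails with the file and line number – which at least explains why it’s a macro and not a function.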

That’s all I had time for this morning, but it’s fun to understand more and more about this as I continue to learn!

CUDA – Three

I ran a CUDA program 🙂

It was a rough experience 🙃

Honestly, getting started with pretty much any programming language involves a lot of banging your head against the toolchain, and slowly untangling old tutorials that reference things that don’t exist anymore. This was easier than some Python setups I’ve done before.

I started with a pretty sparse Windows installation. I keep my computers relatively clean and wipe them entirely about once a year, so all I had to start with was VSCode and … that’s about it. I am lucky that I happen to already have a Windows machine (named Maia) that has an RTX 2080, which supports CUDA.

I installed MSVC (the Microsoft C++ compiler) and the NVIDIA CUDA Toolkit.

Then I tried writing some C++ (not even CUDA yet) in VSCode, and I couldn’t get it to compile. I kept getting an error that #include <iostream> was not valid. As I mentioned, I haven’t written C++ in about 10 years, so I knew that I was likely missing something. I putzed around installing and poking various things. Eventually I switched out MSVC for MinGW (G++ for Windows) and this allowed me to compile and run my “hello world” C++ code. Hooray!

Now I tried writing a .cu CUDA file. NVIDIA provides an official extension for .cu files, and I had everything installed according to the CUDA quick start guide, but VSCode just did … nothing when I tried to run the .cu file with the C++ CUDA compiler selected. So I went off searching for other things to do.

Eventually I decided to install Visual Studio, which is basically a heavy version of VSCode. I don’t know why they named them the same thing, except that giant corporations love to do that for whatever reason.

I got VS running and also downloaded Git (and then GitHub Desktop, since my CLI Git wasn’t reading my SSH keys for whatever reason).

Next, I downloaded the cuda-samples repo from NVIDIA’s GitHub, and it didn’t run – turns out that the CUDA Toolkit version number is hard-coded in two places in the config files, and it was 12.4 while I had version 12.5. But that was a quick fix, fortunately.

Finally, I was able to run one on my graphics card! I still haven’t *written* any CUDA, but I can at least run it if someone else writes it. My hope for tomorrow is to figure out the differences between my non-running project and their running project to put together a plan for actually writing some CUDA from scratch. Or maybe give up and just clone their project as a template!


CUDA – Two

I have an art sale coming up in three days, so I’m spending most of my focus time finishing up the inventory for that. But in my spare time between holding the baby and helping my older kid sell lemonade, I’ve started exploring a few of the topics I’m interested in from the previous post.

Convolutions

Something I was reading mentioned convolutions, and I had no idea what that meant, so I tried to find out! I read several posts and articles, but the thing that made convolutions click for me was a video by 3Blue1Brown, with its intuitive visualizations. Cheers to good technology and math communicators.

Sliding a kernel over data feels intuitive to me, and it looks like one of the cool things about this is that you can do this with extreme parallelism. I’m pretty sure this is covered early on in the textbook, so I’m not going to worry about understanding this completely yet.

It seems like convolutions are important for image processing, especially things like blur and edge detection, but also for feature detection – they allow us to search for a feature across an entire image, and not just in a specific location in an image.
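To pin the idea down for myself, here’s what I imagine a naive 3×3 convolution looks like on a GPU – one thread per output pixel, which is where the extreme parallelism comes in. This is my own untested toy sketch, not something from the book; filter would hold e.g. a blur or edge-detection kernel:

__global__ void conv3x3(const float *in, float *out, const float *filter,
                        int width, int height) {
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  if (x >= width || y >= height) return;

  float acc = 0.0f;
  for (int fy = -1; fy <= 1; fy++) {
    for (int fx = -1; fx <= 1; fx++) {
      int ix = min(max(x + fx, 0), width - 1);  // clamp at the image edges
      int iy = min(max(y + fy, 0), height - 1);
      acc += in[iy * width + ix] * filter[(fy + 1) * 3 + (fx + 1)];
    }
  }
  out[y * width + x] = acc;  // every output pixel is computed independently
}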

One thing I don’t understand yet is how to build a convolution kernel for complicated feature detection. One of the articles I read mentioned that you could use feature detection convolution for something like eyes, which I assume requires a complicated kernel that’s trained with ML techniques. But I don’t quite understand what that kernel would look like or how you would build it.

Parallel Processing

I started reading Programming Massively Parallel Processors, and so far it’s just been the introduction. I did read it out loud to my newborn, so hopefully he’ll be a machine learning expert by the time he’s one.

Topics covered so far have been the idea of massive parallelism, the difference between CPU and GPU, and a formal definition of “speedup”.

I do like that the book is focused on parallel programming and not ML. It allows me to focus on just that one topic without needing to learn several other difficult concepts at the same time. I peeked ahead and saw a chapter on massively parallel radix sort, and the idea intrigues me.

Differentiation and Gradient Descent

Again, 3B1B had the best video on this topic that I could find. The key new idea here was that you can encode the weights of a neural network as an enormous vector, and then map that vector to a fitness score via a function. Finding the minimum of this function gives us the best neural network for whatever fitness evaluation method we’ve chosen. It hurts my brain a bit to think in that many dimensions, but I just need to get used to that if I’m going to work with ML. I don’t fully understand what differentiation means in this context, but I’m starting to get some of the general concept (we can see a “good direction” to move in).
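To make that “good direction” concrete for myself, I sketched a toy example (my own, plain host-side C, nothing to do with real networks): the “weights” are a 2-element vector, the fitness function is a simple bowl, and each step nudges the weights downhill along the negative gradient.

#include <stdio.h>

int main() {
  float w[2] = {0.0f, 0.0f};  // a tiny "weight vector"
  float lr = 0.1f;            // learning rate: how far to step each time

  // Fitness: f(w) = (w0 - 3)^2 + (w1 + 1)^2, minimized at (3, -1).
  for (int step = 0; step < 100; step++) {
    float grad0 = 2.0f * (w[0] - 3.0f);  // partial derivative wrt w0
    float grad1 = 2.0f * (w[1] + 1.0f);  // partial derivative wrt w1
    w[0] -= lr * grad0;  // step downhill, against the gradient
    w[1] -= lr * grad1;
  }
  printf("w = (%.3f, %.3f)\n", w[0], w[1]);  // approaches (3.000, -1.000)
  return 0;
}

The gradient is just the vector of partial derivatives, one per weight, pointing uphill – so stepping against it is the “good direction”.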

I haven’t worked with gradients since Calc III in college, which was over a decade ago, but I’ve done it once and I can do it again 💪. It also looks like I need to understand the idea of total derivative versus partial derivative, which feels vaguely familiar.

Moving Forward

Once the art sale is over, I’ll hopefully have more focus time for this 🙂 For now, it’ll be bits and pieces here and there. For learning CUDA in particular, it looks like working through the textbook is going to be my best bet, so I’m going to focus some energy there.

From Grand Rapids,
Erty


CUDA – One

First, some backstory. I was laid off from Google in January and I’ve taken the last six months off, mostly working on art glass and taking care of my kids (one of whom was just born in April, and is sleeping on my chest as I write this). I’m slowly starting to look for work again, with a target start date of early September 2024. If you’re hiring or know people who are, please check out my résumé.

A friend of mine recently let me know about a really interesting job opportunity, which will require working with code written in (with?) CUDA. The job is ML related, so I’ll be focusing my learning in that direction.

I don’t know anything about CUDA. Time to learn! And, why not blog about the process as I go along.

First step: come up with some resources to help me learn. I googled something like “learn cuda” and found this Reddit post on the /r/MachineLearning subreddit. It looks like I’ll probably be learning a couple of related topics as I go through this journey:


CUDA

This is the goal. It looks like CUDA is a language + toolkit for writing massively parallel programs on graphics cards that aren’t necessarily for graphics. Basically, making the GPU compute whatever we want. If we use this for, say, matrix multiplications, we can accelerate training of ML models.

Python and C++

C++? I haven’t written C++ since college a decade ago. I think I remember some of it, but I’ve always been intimidated by the size of the language, the number of “correct” ways to write it, and the amount of magic introduced by macros. I also don’t like the whole .h / .cc thing, but I suppose I’ll just have to get used to that.

I’m pretty good at Python, having written several tens of thousands of lines of it at Google, so I’m not super worried about that.

PyTorch or TensorFlow

Some folks on the Reddit post linked above recommend a specific tutorial on the PyTorch website, which looks interesting. It seems like PyTorch is an ML library written in Python (based on Torch, which was written in Lua).

PyTorch is Meta’s (now governed under the Linux Foundation). TensorFlow is Google’s. Both use C++, Python, and CUDA.

Matrix Math

In college, I was only briefly introduced to matrix math, and most of that exposure was a graphics course that I audited. Based on my brief reading about all of this, it seems like the major advantage of using graphics cards to train ML is that they can do matrix math really, really fast. It’s up to me to brush up on this while I explore the other things. I don’t yet have a specific study plan for this.
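For a taste of why this maps so well to GPUs: every cell of the output matrix can be computed independently, so a naive matrix multiply is just one thread per cell. A toy, untested sketch of my own (square n×n matrices):

__global__ void matmul(const float *A, const float *B, float *C, int n) {
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  if (row < n && col < n) {
    float acc = 0.0f;
    for (int k = 0; k < n; k++)
      acc += A[row * n + k] * B[k * n + col];  // dot product of a row of A with a column of B
    C[row * n + col] = acc;  // each output cell is independent of the rest
  }
}

A real implementation would apparently tile this with shared memory for speed, but even the naive version shows the shape of the parallelism.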

Parallelism

According to redditor surge_cell in that previously linked thread, “There are three basic concepts – thread synchronization, shared memory and memory coalescing which CUDA coder should know in and out of [sic]”. I’ve done some work with threading and parallelism, but not recently. Most of my work at Google was asynchronous, but I didn’t have to manage the threading and coalescing myself (e.g. async in JS).
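I don’t really know these three concepts yet, but here’s the flavor as I currently understand it – my own untested sketch of a classic shared-memory exercise (reversing each block’s chunk of an array in place), assuming it’s launched with 256 threads per block:

__global__ void reverse_each_block(int *data) {
  __shared__ int tile[256];  // shared memory: fast, visible to the whole block
  int t = threadIdx.x;
  int base = blockIdx.x * blockDim.x;

  tile[t] = data[base + t];  // coalesced read: adjacent threads hit adjacent addresses
  __syncthreads();           // thread synchronization: wait for every load to finish
  data[base + t] = tile[blockDim.x - 1 - t];  // coalesced write, reversed via the tile
}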

Resources

Ok – so, what am I actually going to do?

I browsed some YouTube videos, but the ones that I’ve watched so far have been pretty high level. It looks like NVIDIA has some CUDA training videos … from 12 years ago. I’m sure the language is quite different now. I also want deeper training than free YouTube videos will likely provide, so I need to identify resources that will give me a deep knowledge of the architecture, languages, and toolkits.

First, I’ll try to do the Custom CUDA extensions for PyTorch tutorial. See how far I can get and make notes of what I get stuck on.

Second, one of the Reddit posts recommended a book called Programming Massively Parallel Processors by Hwu, Kirk, and Hajj, so I picked up a copy of that (4th Ed). I’m going to start working through it. It looks like there are exercises, so I’ll be able to actually practice what I’m learning, which will be fun.

Finally, I’ll try implementing my own text prediction model in ML. I know you can do this cheaply by using something like 🤗 (aka HuggingFace) but the point here is to learn CUDA, and using someone else’s pretrained model is not going to teach me CUDA. I’m optimizing for learning, not for accurate or powerful models.

Questions

There’s a lot I don’t know, but here are my immediate questions.

  1. I have an NVIDIA card in my Windows computer, but I don’t have a toolchain set up to write CUDA code for it. I’m also not used to developing C++ on Windows, so I’ll need to figure out how to get that running as well. I have a feeling this won’t be particularly tricky, it’ll just take time.
  2. I have a lot of unknown unknowns about CUDA – I’m not even sure what I don’t know about it. I think I’ll have more questions here as I get into the materials and textbooks.
  3. It seems like there are a few parts of ML with various difficulties. If you use a pretrained model, it seems pretty trivial (~20 lines of Python) to make it do text prediction or what have you. But training the models is really, really difficult and involves getting a lot of training data. Or, perhaps not difficult, but expensive and time consuming. Designing the ML pipeline seems moderately difficult, and is probably where I’ll spend most of my time. But I need to understand more about this.

That’s it for Day One

If you’re reading this and you see something I’ve done wrong already, or know of a resource that helped you learn the tools that I’m talking about here, please do reach out!

From Grand Rapids,
Erty

ADHD / Atomoxetine

Edit: added some updates 2024-03-01

I’ve been quite open about the fact that I was diagnosed with ADHD in December of 2021, and in January I started taking Atomoxetine (brand name Strattera) for it. I think it’s important to talk about neurological health and mental health; the stereotype of guys not talking about mental health is true. When I revealed to my friends that I had ADHD, several of them confided to me that they had a similar diagnosis – which I didn’t know! I think if we’d all been open about it, the process would have been easier for me to navigate. So: I have ADHD and am happy to talk about it.

Everything else in here is Not Medical Advice. Talk to your doctor. I just want to share my experience.

I was *very* nervous about getting a prescription for ADHD medication, since I feel like I get addicted to things easily, and I’d heard that many of the drugs on the market (especially the stimulants like Adderall) were habit-forming. The idea of adding something to my life that would be difficult to remove later really scared me.

To theorize for a moment: I think the availability of only stimulants – and perhaps also a lack of data on correct dosages and a lack of time-release pills – led to the “ADHD zombie” stereotype that I saw a lot of in the oughts and tens, when I was really struggling to exist in the neurotypical-centered school system and really could have used the diagnosis. I was afraid of this “zombie” outcome, which is part of why it was only after lots of therapy that I was willing to attempt a diagnosis.

My psychopharmacologist ($10 word) told me that there are newer (well, since 2002, compared to Ritalin in 1944(!)) non-stimulant drugs for ADHD that are less likely to be effective, but without the habit-forming effects. With my doctor, I decided to try Atomoxetine (brand name Strattera), and I’ve been on it since January.

I only have a layperson’s understanding of how Atomoxetine works, but here’s my attempt: Atomoxetine is a noradrenaline reuptake inhibitor. This means that noradrenaline that I produce will linger in my system for longer, leading to an increase in my alertness level, but without artificially increasing my noradrenaline creation or release – it just slows the rate at which it’s reabsorbed if not used.

The effect is that I no longer need to stim as much to keep alert and not bored. And as a dad of a three-year-old who wants to play doctor for the 1000th time, not needing new stimulation to avoid wandering away or becoming distracted allows me to stay focused on play with my daughter, which makes it incredibly worth it. Being able to focus at work is also good 🙂 The medicine just makes it like, 30% easier to say "no" to distractions (e.g. phone, internet), which is enough for me.

My Experience with Vyvanse

After taking Strattera for a while but getting sick (lol) of the side effects, I finally got over my fear of Stimulants and tried Vyvanse (I can’t recall the dosage, but IIRC I started at half the normal amount, so, 10mg? 20mg?). My layperson’s understanding is that Vyvanse is basically Adderall (amphetamine salts) with a lysine amino acid attached. Your body metabolizes off the lysine at a certain rate, which then activates the stimulant. Because your body can only metabolize so much lysine at once, it causes the stimulant to slowly enter your system instead of all at once, making for a smoother experience.

At least, that’s what’s supposed to happen. Even on half the normal dosage, I ended up feeling incredibly high and lightheaded for the two days that I was on it, and my memory and executive function were much worse than normal. I took a half dose and then nothing, but I still experienced a deep depression afterward for a few days.

I talked with a relative who also has ADHD and they said they had the same experience with Vyvanse/Adderall, so maybe we just metabolize it in a weird way? I’m sure stimulants work for some folks but they don’t seem to work for me. I did like that you could split the dose up or mix it in with food.

Erty’s Non-Medical-Advice guide to Atomoxetine/Strattera

This is my own experience based on about two years of taking Atomoxetine. The main side effects I got were nausea/lightheadedness, high heart rate, anxiety, and some acid reflux/heartburn. However, I’ve mitigated most of them using the methods below.

Heartburn

The heartburn isn’t super bad and I can take Tums for it. Sometimes it leads to a phlegmy throat, which goes away with tea and throat clearing. I’m not entirely sure that this is related to the Atomoxetine, it might just be that I’m getting old, lol.

Nausea / Anxiety

I think the nausea and anxiety are related to an empty stomach OR too much meds at once. My doctor started me at 40mg, which I eventually figured out was way too much for me (for reference: I’m 130lbs), so I dropped to 25mg and felt much better while still feeling like I was getting the benefits of the medicine. I also found that I could resolve the nausea/anxiety by eating a large, protein-filled meal. For example: I would get the side effect if I ate a bowl of cereal (carbs), but not if I ate a bowl of cereal with a bunch of peanut butter on top (oils/fats/protein). I finally tried 10mg, and am now on 18mg (just starting it!). So it took me a while to find the "right" dose.

I also think the nausea is worse at the beginning of the meds (or if you skip a few days). It seems like my body got used to the meds and figured out a homeostasis with them that didn’t involve nausea. Thanks, body.

The anxiety also seemed to crop up when I ended up with an empty stomach and meds still in my system. This was most noticeable when I would wake up in the middle of the night and feel "chemically anxious" – not anxious about anything in particular but just, like, anxious. I eventually realized this was because my stomach was empty. I started eating a healthy midnight snack just before bed and that helped resolve it. I also started taking Magnesium Glycinate supplements, which really helped reduce the anxiety.

High Heart Rate

I found that my heart rate would spike if I drank caffeine at the same time as my meds – like, a resting heart rate of 90 when it’s usually more like 55-65. In fact, I was able to completely drop my two-cups-a-day caffeine habit without any withdrawal. Nowadays I can have one cup of coffee in the morning without making myself too anxious, although I sometimes opt for decaf or half-caf.

Taking the Meds at Night

At the advice of my doctor, I tried taking the meds at night right before I went to bed. The idea was that the worst of the nausea happens a few hours after I take the pill, so if I’m asleep, I won’t notice. This worked ok! But it also made me wake up at 5am or so most nights. That … might be a good thing? But I don’t know, I ended up more tired later and my sleep schedule was sometimes biphasic which left me exhausted the next day.

Eventually I switched back to taking the meds in the morning / early afternoon.

Overview

I don’t like eating a big protein breakfast though, so my current (working for me) system is:
– Morning: Drink a small cup of coffee with a very light or no breakfast.
– Around noon: Eat a large lunch with some protein/oils/fats and take 25mg atomoxetine.
– Around 6pm: Eat a normal dinner.
– Around midnight: Eat a medium meal with Mg supplement and some sleepytime tea, fall asleep.

All in all, I’m quite happy with the meds. I don’t notice anything really different when I take them, but when I don’t take them, I can tell that it’s much harder to get things done, and my partner especially notices. In fact, there are days where I’ve later said, "Oh, I forgot to take my meds yesterday" and she’s replied, "oh, that makes yesterday make so much more sense". So obviously they’re doing something.

I hope this helps someone!

From Grand Rapids,

Erty

Mental Model Metacognition

I’ve been seeing a therapist recently, and it’s been quite nice to be able to take an hour to introspect my mental processes with help from a professional. I also enjoy the time set aside to focus on myself – I don’t feel like I’m taking over the conversation or being selfish with the time.

Aside: I’m not working through any particular issue or trauma in these sessions – a fact that reminds me how messed up the health care system is. Folks who would really benefit from therapy are unable to afford it, while my employment and insurance provide it to me for very cheap. I acknowledge that I’m very privileged in this way.

I think about the idea of a Mental Model a lot, both for myself (“what is my mental model of X”) and for others (“how can I teach Y so that the student has a good mental model”). Teaching, I like to say, is the act of building and debugging mental models in others.

When we interact with the world, we use our mental models to predict what will happen given certain conditions and actions. When we get a mental model wrong, it can be bad, embarrassing, or harmful. Having an incorrect mental model of how a stove works might cause a burn, for example.

Therapy like mine, then, is a way of airing out my own mental models. I show them to someone else and they give me feedback. In this way, I refine the mental models to make better predictions and live a better life.

A 2019 post by Alok Singh titled Mental Model of Dental Hygiene got me thinking about the practice of publicly airing out a mental model. I think about Alok’s post often (a lot of the time while brushing my teeth, natch), and I’ve been applying this technique in my own life.

Journaling surely helps in this way as well – the act of organizing a mental model so that it can be written, and viewed as a whole, allows for a different kind of processing. But a journal also doesn’t provide feedback. There’s a certain risk that people take when sharing a mental model with the world. This is perhaps why a journal is kept private, and therapists have laws around confidentiality. The more risk you’re willing to take on, the more and wider feedback you can get.

But why is there a risk? Here’s my mental model (ha ha!) about what that risk is:

  1. A wrong mental model can be embarrassing
    1. People don’t like feeling like they’re wrong, and especially in ways that form a foundation for other thought. Revealing a mental model that’s wrong can invite scorn, teasing, and other humiliation.
    2. Sometimes, this causes people to double down on a wrong mental model instead of abandoning it.
  2. The mental model reveals a deeper kernel that’s shameful
    1. One might reveal a mental model that relies on an assumption that’s e.g. racist or sexist, which would cause them to lose respect or face repercussions.
  3. The mental model conflicts with a political, ideological, or commercial standpoint
    1. You may find that people are resistant – even physically – to a mental model being shared. Purely for example: Alok’s post might run afoul of folks who believe in conspiracy theories about Fluoride, or dentists who make money off of fixing cavities.
    2. I find myself often overweighting this risk. The odds are low but the penalty is high.

But perhaps it’s in the face of this risk that sharing a mental model becomes even more important – you can simultaneously retool your own process and at the same time influence someone else’s.

I sat down today to write out my mental model of Dopamine and focus but wrote this instead, which is apropos. I’ll have to follow up with another post.

From Grand Rapids,
Erty


Teaching Programming

Note: This is a post I drafted in 2017 and am publishing now in 2020.

My Background

I’m working as a tech educator now – my official title is “Lead Instructor”. I even have a super fancy business card:

[image: business card]

This is after "years of industry experience" and "many hours of classroom instruction", specifically volunteer work through TEALS-K12 teaching computer science in Brooklyn, work-study teaching Python to 7th-12th graders during college, and teaching a self-designed ActionScript/Flash curriculum to peers during high school.

I think a lot about education. I think a lot about quotes like @LiaSae’s, because I agree with them. I believe the current problems with the American school system are problems of economics, not education. The problem is that it’s the "Silicon Valley Jerks" like myself that have the wherewithal (read: money/privilege) to try crazy things like coding schools, but we don’t have the teaching background. Someone needs to combine the two.

Teaching Certificates and Regulation

The boot camp I work for is accredited by the state (CODHE). The application process is rigorous and it takes hundreds of hours to put together the documents. This ensures that CODHE has reviewed our curriculum and approved it. But, this means that if we come up with a “better” curriculum that we’d like to try, there’s no room to pivot. We would have to go through the approval process again, which nobody wants to do. Pros and cons.

(Edit from 2020: we actually could pivot. As far as I can tell there was very little oversight and so calling “audibles” to switch out materials or teach something different was totally fine, as long as we believed it would benefit the students. The state just wanted to make sure we were actually thinking through this stuff? The company is now defunct so I feel like I can admit this).

I don’t have a teaching certificate. As far as I can tell, the need for programming teachers is so huge right now that they’ve basically dropped that requirement. You also don’t need a teaching degree at a private institution like the one that I’m teaching at now. I understand that this is a huge gap in my experience, and I’ll certainly be getting one right after completing an undergrad teaching preparation course, 800 hours of teaching experience, and… yeah… not going to happen anytime soon. I can’t even find out if there’s an accredited program for getting a computer science teaching certificate (maybe at Regis? What a convoluted PDF).

As far as I can tell, no teachers at bootcamps have teaching certificates (let me know if you do!). Does this mean we’re all those Silicon Valley Jerks? Yes, and that sucks for the students and it sucks for the companies that try to hire out of boot camps like mine. Lots of people argue that boot camps should be regulated by the government, and I agree with them – I think @LiaSae would too. But then, I would be out of a job and 99.99% of bootcamps would shut down.

(Edit from 2020: it seems like this was a good idea. I should have taken a harder stance on this, since it does seem like a lot of fly-by-night bootcamps popped up and quickly shuttered. The one I worked for, I believe, was doing a lot of the right things, but the market quickly saturated for junior developers and we started not being able to get people hired. After I quit – taking a job with a 30% salary bump – the bootcamp I had worked for wasn’t able to attract enough students and shuttered.)

It’s hard enough convincing programmers to take a pay cut and work at a boot camp. It’s even harder to do that to programmers who have the prerequisite charisma to teach, since that probably means they have the prerequisite charisma to climb a corporate ladder. All of my favorite teachers have done the job because they love inspiring students and because they love teaching.

I taught programming at the School for Human Rights in Brooklyn, NY, through the TEALS-K12 program. The TEALS program takes tech professionals and places them in morning classes – usually four volunteers to a class – and has them teach programming before they go in to work. They teach not only the students, but also the teacher, with the hope that the teacher will be able to teach the class on their own after 3-4 years of instruction.

None of these volunteer teachers are regulated or licensed, though they are trained. The year I participated, they also expected us to come up with the assignments and lessons for the class. It went… poorly. (Based on our feedback, they’ve hired curriculum developers and really fleshed out the materials. Based on their new materials and their progress, I highly recommend volunteering through them if you’re interested in tech education but you can’t quit your job just yet.) I was lucky that I’d done amateur curriculum design before, and one of my co-teachers had been a licensed chemistry teacher for years. That said, if they tried to hire only programming professionals who had teacher licensure, they’d have just a handful of schools in their program instead of 161.

I think this is a great example of Silicon Valley Jerks who know nothing about education really trying to make a difference. Is it the best teaching experience for the students? No. I certainly floundered a lot when I was learning to teach. But it’s certainly better than no technology education at all due to a lack of resources.

(Note from 2020: The more I learn about how schools are funded by property tax to specifically benefit rich folks’ schools, the more I strongly believe we need more fundamental reform than just sending Silicon Valley Jerks like myself to go teach at underfunded locations. But it’s a good bandaid in the meantime. You need to treat the symptoms while you treat the cause.)

I’m not sure where I stand on the issue of teacher licensure in computer science and I would love to incite a discussion on the issue. Pros mostly involve better oversight, better instruction technique, and more consistent curricula. Cons involve not being able to move fast and pivot, an even greater undersupply of teachers, and fewer learn-to-code startups. Imagine if Salman Khan had to get a teaching license first.

(Note from 2020: I’m back to teaching, but this time I’m actually titled as an "Adjunct Lecturer" at Howard University through a program at Google, where I work now. It’s the same thing: send us Silicon Valley Jerks into programs to help build out the pipelines. We need to do this and ALSO address the underlying economic and prejudice problems that lead to this, by rethinking how we do things like student loans and school funding. I originally had some more topics to cover here, but I think what’s above is good on its own.)

From the Upper Peninsula of Michigan,

Erty

Awkwardness, Wavelengths, and Amplifiers

I’m an introvert, and although I can pass as an extrovert in certain situations (like being in front of a crowd, or when hanging out with people I know well), I still have a problem with small talk. I’ve run into this problem a couple of times over the last few days, where a conversation I’m having with someone suddenly dries up. I often make it worse by sighing or shifting my gaze downward. There’s just nothing to say for a moment and we (this happens in one-on-one and small groups, mostly) just sit for a while and steep in the silence.

Often we get the conversation going again (I usually try to ask about hobbies, goals, work, etc) but it’s a painful reminder that I’m not great at keeping a conversation going.

But there are some people with whom I seem to be able to talk for a long time, at length about many topics, until some external force has to intervene to end the conversation. I, of course, try to make friends with these people and hang out with them often, but occasionally it happens with a stranger. What mysterious force is it that suddenly makes me able to hold a deep and intelligent conversation with someone, without having to resort to "small talk"? I was thinking about this on the subway home after a party tonight and framed it in an interesting way that I thought might make a good essay.

Note, I don’t want to claim I’ve “discovered” anything or that this is “the way”, I just want to explore this idea and would love feedback on it.

People have certain interests, and various intensities of these interests. I might call these "wavelengths" – a frequency (topic) and an amplitude (depth of knowledge / interest) – and people carry a multitude of them. I, for example, could talk to you at length about webcomic publishing, or perhaps the 1987 roguelike computer game Nethack, or how everything about the Scott Pilgrim movie was perfect except for Michael Cera. About all of these things I have factoids, opinions, and perhaps most importantly an interest in discussing them.

If you ask me to talk about gasoline cars, or maybe the Kardashians, or football, I have a thought or two but you’ll quickly discover that I’m not “on that wavelength.” The conversation can’t last long because I don’t have much to contribute. I’ll say, “hmm, interesting” and listen to you and be happy to learn some things, but I won’t have anything real to contribute. And so unless you’re very passionate about the topic, the conversation will soon end and I’ll make an excuse about having to refill my drink and wander off to find a new conversation. Which is fine, I bet you don’t want to be in this staring-at-the-floor-contest any longer either.

There are also some real dampeners, which one should seek to avoid. Some people don’t like to talk about some things for real reasons, and it’s not kind to force them onto those topics.

And so striking up a conversation is a frequency-searching exercise. What do we have enough in common to talk about? Work, sure. The weather, sure. Complaining about the MTA, sure. But those things aren’t (usually) the kind of things that get people really excited. And sometimes they’re dampeners, like when someone is having a bad time at work and you ask them how work is going. But it’s difficult, since the things people really like to get into the weeds about are often obscure, and there’s a strange pressure against just opening conversations with, "hey, are you into Nethack?" unless there’s some reason, like I saw you playing Nethack. I think it’s a failure thing; if I get all excited – "oh, are you a Nethack fan?" – and the response I get is, "what’s that?", then I know I’m in for giving an explanation, which isn’t the same as a conversation.

Which of course is one of the reasons that the internet is so neat. I can just click some buttons on my $2000 facebook machine and get instantly connected to a large group of Nethack fans. Sometimes these online conversations spill over into real life. But often the Venn diagram of people I hang out with IRL and the people who are in these online groups is two circles. The fact that we can find these “lifestyle enclaves” (see Habits of the Heart by Bellah et al) of people on the same wavelength can also be dangerous echo chambers.

But the best, the best thing is when you run into someone who is an amplifier on your wavelength. My partner is like this for a lot of things, where we both get excited about something and end up being able to talk about it for a long time. And I have a friend who is like this for technical things – once we start coming up with tech and business ideas it’s very difficult to stop.

But to do this, your wavelengths have to be similar, and just like music there have to be other notes – other wavelengths that you can bounce off of to add interest to the conversation without it falling flat. And these amplifiers are rare. You know them when you find them and you hold on to them. They’re people who hear your ideas and “yes and” them, sending the wavelength back to you, but louder. You’re safe to explore here. You can even dig around for new wavelengths together, since you can always return to your common ground if nothing turns up.

There are some people who seem to be able to frequency-hop easily. It’s practice, I know, but I’m not that good at it. And as an awkward nerd-human I’m terrible at hiding when I’m uninterested in a wavelength; I quickly lose interest. My partner is great at this – she has the ability to work with people across a much wider variety of interests and be (or at least seem) interested in what they’re saying, and carry on a conversation. This is a skill I’m still working on, but it is a skill that can be practiced.

Name Change

When Greta and I got married, we joked that we were going to merge our last names (Dohl and Seidel, respectively) into a portmanteau, “Seidohl”. With our wedding date approaching and no better ideas, we happily went forward with that idea and made it our legal last name. This isn’t a guide (there’s a good one here) but really just a story.

For posterity, and also to hopefully instruct anyone interested in doing the same, I’ve decided to write down the processes we’ve gone through. Please let me know in the comments if you’ve had similar or different experiences!

Family

To begin: my parents, The Seidels, took my mom’s maiden name when they were married. As far as I can gather, they did it mainly because of my dad’s strained relationship with my Grandfather. (Only one child – an adopted second cousin – still bears the Pizarro name that was handed down by my Grandfather). Still, they apparently faced some hardship in changing my dad’s name officially, so I was expecting a tough time of making up an entirely new name.

Telling my family was not hard. Most of them agreed that it seemed a perfectly reasonable thing to do and actively encouraged us. My maternal grandfather (rest in peace) seemed surprised, although he was in good humor about receiving the news that I would be likely the last Seidel on his side of the family. He mentioned one great aunt who would be “spinning in her grave” about the news. (Connect her to a turbine?). I never met her and, to date, have not been visited by her angry ghost. So I think I’m good there.

YMMV, of course, with your own family.

Marriage License

In the county where we were married, we could only change our names to a) one of our existing last names or b) hyphenate our last names. (I can no longer find a reference to this, but Michigan’s marriage laws being as backwards as they are, I wouldn’t be surprised.) So for our actual marriage, we just c) kept our own last names.

New York

We moved to NYC right after getting married, and changed our names through the NY state court system. This was relatively straightforward. We filled out some notarized paperwork and got a court date. Note that finding a Notary Public can be difficult, even in Manhattan! We needed our original Birth Certificates as well.

We dressed up nice and appeared in front of a judge. The main questions we were asked related to figuring out if we were doing this to get out of a debt, crime, or other obligation. I remember that the guy in front of us was changing his name for religious reasons and the judge approved that as well.

The details of this next part are a bit fuzzy, since this was so long ago. The judge approved our name change and sent us to get certified copies. There was a (IIRC) $65 charge to change the names, and then each notarized copy cost about $10. We ended up needing five (?) certified copies: one for our records, some for services like the Social Security office and debts (student loan companies), and more to publish in the newspaper. You must publish your name change in a public newspaper.

You can publish in the New York Times, if you want to shell out a boatload of money. I published in, I think, the Irish Echo, which cost about $35. You don’t need to be Irish to publish there! It’s one of the cheapest papers to publish in so I expect they do a brisk business on this.

Finally, we got certified copies of our completed name change documents for our records (another $6 per copy, I think?). We used that to do things like refresh our passports.

Problems

The only institution that really gave me any grief was my bank. They seemed perfectly happy to accept that my wife’s last name had changed through marriage without any documentation (this seems like a major security flaw???), but as soon as I told them that we’d changed it in front of a judge, suddenly they needed me to send documents for both of us. I did, and they changed the names promptly.

Trying to change my frequent flyer miles name on Southwest also caused problems. Their online name change form simply didn’t work, and none of the phone support people could do anything but tell me to go fill out the form. I think I eventually got around it but it required some developer console hacking??

Changing emails, usernames and websites was also tricky. I still have my old last name in some usernames. I was very fortunate that I’d chosen ertysdl as my email username, since sdl stands for both Seidel and Seidohl! I promise I didn’t plan that. Some sites seem to use your username as a unique identifier, and why would that ever change?

Conclusion

Changing my last name wasn’t a difficult task, although it was made harder by Michigan state law, which didn’t allow us to change our names at the time of our wedding – that would have saved us a lot of time and expense. Only a few entities gave me trouble about updating my name, but otherwise it seems like a pretty common thing to do and most of the clerks didn’t blink an eye – in fact, it seems like several people change their name every day in NYC, so the process is pretty streamlined.

Postscript: Thinking thoughts

I didn’t grow up with any strong connection to the Seidel name. It’s generally a German surname, and I’ve always wondered if the anti-German sentiment of the 1940s led to my earlier family suppressing that aspect of my heritage. I have a much closer affinity for Sweden, since I was partially raised by my dad’s mom who was born to Swedish immigrants. That said, I don’t really consider myself Swedish or have any connection to the country and its people other than that.

There aren’t many other Ertys in the world! I used to come up on the first page of google results with just my first name, but that seems to not be the case any more. Unfortunately, unique names come with downsides as well. There’s some weird art out there with my name attached to it (I didn’t make it!). However, with a unique first and last name, I end up being very Googlable. That’s something I decided was good?

To me, changing my name like this is an expression of the individualism and emptiness of the modern "white american" culture. I don’t have a connection to any large family or lineage through my names. I’ve changed both my first name (from Erik to Erty) and my last name (from Seidel to Seidohl), and I rarely use my middle name (and have considered changing it at times as well). A name is an outward expression of self. It’s like a tattoo. I don’t not like my original names, I’ve just found new ways of expressing myself. I think this is a form of rebellion against previous generations that put so much cachet into names – let’s discard that and refer to ourselves how we want, not just on the internet. I’ve been lucky to not have familial pushback on these decisions, so they’ve been almost no work at all.