Scott Hanselman Teaches AI | Customers, Etc.
A master class in teaching and a helpful introduction to generative AI
Early in my career, before I started working in tech—this would have been 2008 or 2009—I decided that I wanted to get a job as a software developer. As a self-taught coder equipped with a degree in philosophy, I had no idea how to do that. I tried everything I could think of: I read blogs, listened to podcasts, attended local meetups, and started following people on this new website called Twitter. One of those people was Scott Hanselman.
Scott worked for Microsoft. I’m not sure if I ever knew what his actual job was, but I knew he gave the most amazing demos. If Microsoft was coming out with some new technology and Scott was giving a demo, I wanted to watch it.
It’s probably been a decade since I’ve seen Scott Hanselman give a demo—my career went in a different direction and I haven’t been following Microsoft developer tools terribly closely—but recently he tweeted about giving an AI demo and I decided to watch.
My mind was blown and I was reminded what an amazing teacher he is.
I want to gush about what I love about Scott’s teaching style, but before we go any further, I need to share the actual demo that Scott Hanselman and Mark Downie gave at Microsoft Ignite 2023.
And if you’re skeptical about whether it’s worth your time, just watch the first ten minutes, which aren’t about software developer tools at all, but are an extremely helpful explanation of how generative AI works and a perfect illustration of Scott’s teaching style.
I started following Scott Hanselman and listening to his podcast, Hanselminutes, because he was in a sphere of technologists who used or promoted tools in the Microsoft developer stack. Having cut my teeth on VBA—the programming language that can be used to automate Microsoft Excel and Access—I wanted to learn as much as I could about the latest and greatest coming out of Microsoft. Scott was a great conversationalist and I always walked away feeling like I had learned something.
(In 2009, Scott interviewed Joel Spolsky at Fog Creek’s office in lower Manhattan. I loved Joel’s blog and read everything he posted on joelonsoftware.com. A year and a half later, I had moved to New York and begun working as a support engineer at Fog Creek in the very same office where that interview had taken place.)
Nothing too basic
The thing I love most about watching Scott give a demo is that there is nothing too basic for him to talk about, or more properly, for him to teach. That’s what he’s doing. He’s teaching.
In a lot of software demos, or when talking about technology in general, we like to race to the big reveal, the magic, the thing that’s going to make us go, WOW! But Scott always gives time (and genuine enthusiasm) to the basic, yet foundational aspects of knowledge. When you walk away from the demo, you not only feel amazed, but you feel like you really understand what he was talking about.
Take, for example, the very beginning of the AI demo, at about three minutes in. He goes into an OpenAI playground and gives a very high-level overview.
He’s talking to a room full of developers and technologists. He could assume they already know how generative AI works, but he doesn’t. He wants to make sure the whole class has a shared base of knowledge before he continues.
If you watch this section, you’ll notice he spends a significant amount of time on this seemingly basic (yet foundational) concept. Watch when he enables the “Full spectrum” of probabilities and starts typing out an example:
“This is effectively AI as Family Feud. Steve Harvey’s here. It’s a beautiful day, show me… Beach! Show me… Park! What’s the right answer? Let’s find out.
…
Is that answer right?”
I love the way he teaches this. By using the analogy to Family Feud, he’s able to break down the complexity of generative AI into a simple concept that everyone can understand. And he’s not just saying it; he’s showing it on the screen, zooming in, and drawing a big red box around it so it’s impossible to miss. I love it.
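To make the Family Feud framing concrete: under the hood, a language model scores every possible next token, and the playground’s “full spectrum” view is just that ranked list of probabilities. Below is a tiny, purely illustrative Python sketch (the candidate words and scores are invented, not taken from the demo or any real model) showing how raw scores become the kind of probability list Scott zooms in on.

```python
import math

# Purely illustrative: invented scores for a handful of candidate next words
# following a prompt like "It's a beautiful day, let's go to the ___".
# A real model scores its entire vocabulary; the playground's "full spectrum"
# view is essentially this list, sorted by probability.
logits = {"beach": 4.1, "park": 3.7, "walk": 2.9, "office": 0.4}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>7}: {p:.1%}")
# The model "answers" by sampling from (or taking the top of) this
# distribution, so there is no single right answer, only likelier ones.
```

Sampling from that distribution rather than always taking the top entry is why asking the same question twice can give you “Beach!” one time and “Park!” the next, which is exactly the point Scott is making with the survey-says framing.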
Always guiding your focus
Even a decade later, I still remember how much Scott relied on the ability to zoom in on the screen and focus the audience’s attention1. I laughed a little when he gently reminded Mark about the Windows shortcuts that let you fully take control of the demo. For Scott, it’s second nature, but so many presenters stay so focused on what they think is cool that they forget to put themselves in the audience’s shoes. (Not that Mark was doing that—he may just not yet be at Scott’s level when it comes to those shortcuts).
When you’re giving a demo, it’s easy to forget that the audience is only going to process at most a quarter of the screen real estate you have in front of you. They’re just too far away. So if you want them to see something on the screen, you have to literally zoom in and show it to them. Otherwise it will be lost. (This is true for slides as well. All those bullet points? Lost.)
Meta, a second story woven under the first
Watching Scott, you can tell that teaching comes effortlessly to him. That’s not to say he doesn’t have to put effort into his preparation, but by the time he gets on stage, he knows exactly what he’s doing and is very comfortable delivering his message.
What absolutely blows my mind, though, is how he’s able to weave a subtle secondary narrative into his entire demo. I’ll extract a few moments and quotes and then comment below.
First, the shirt he’s wearing, as explained by ChatGPT:
The shirt Scott Hanselman is wearing features a reference to a famous episode "Darmok" from "Star Trek: The Next Generation". The text "Darmok and Jalad at Tanagra" on the shirt refers to a metaphorical language used by an alien species in the episode, where they communicate through allegory - using mythological and historical references to convey meaning. The episode focuses on the difficulty of communication between different cultures and the effort needed to understand each other. In the context of the shirt, it's likely a nod to geek and programmer culture, as well as to the themes of communication and understanding in technology and software development, which are very relevant to discussions about AI like GitHub Copilot.
I love how he sets up the very technical concept of generative AI being a new user interface before sliding into the quasi-ethical (or dare I say religious) questions about how we interact with them:
“People are saying that it’s going to change everything, but we’re not really sure how it’s going to change everything. Now the important thing to understand is that we’re just getting started. This is a new user interface. We are spending time figuring out where buttons go on the screen. We also need to figure out ‘how do you talk to your computer?’ It’s not a person. Should you talk to your computer? Should you treat it like a human? Should you be kind to it? Should we name them?”
In this part, he shares one of the basic ethical conundrums of training AI on the entire internet. You have to listen (at about 7:50) to hear the inflection in his voice when he says, “This is not a joke.” Although he moves on rather quickly, I wonder whether this meta point was actually meant to be the point of his entire demo.
“And in this case, the OpenAI very large language model is trained on all the text on the internet. And this is very important, because the internet is 49% a joy, and 49% pure evil, and 2% that haven’t decided which side they’re on yet. And what’ll happen is if you are cruel, if you are unkind, if you are mean, if you are impatient, if you say mean things to the AI, the next word will be mean, cruel, and unkind. This is not a joke. And it is still not a person [he smiles slightly]. But if you say nice things, if you compliment the AI, you’re going to end up pulling from the nice parts of Stack Overflow, and Google, and Bing, and the places where people are pleasant.”
As they start getting into the actual demo, they hop over to Mark’s blog to find an example post they can use. Neither of them mentions it in the demo, but it’s not a coincidence that the post Mark chooses is titled “We shape our tools and thereafter they shape us”, the quote from Father John Culkin, SJ, which is often attributed to famed media theorist Marshall McLuhan. Fitting, right?
Throughout the demo, Scott makes a point to weave in how Microsoft Copilot takes a different philosophical approach than out-of-the-box ChatGPT.
“A lot of us, in our interactions with ChatGPT, when we see demos, and everything is live. None of this canned. To be clear, we don’t do canned demos. … is, you ask the thing a question, and it gives you an answer, and then you marvel at how smart the AI is. This is not that. The AI is not smarter than us. It’s our helper. It’s a Copilot, not a Pilot-Pilot.”
Another subtle thing to notice is how Scott often weaves in his points while Mark is getting something ready on screen. As an audience member, you’re rarely left waiting.
“So this user interface experience that you’re developing with Copilots is not about the tech. It’s about the humans. It’s about the decisions. It’s about the responsibility and intentionality about what we’re going to do. So you at the Copilot team are deciding ‘what should a pair programmer know and how should it behave?’”
He’s constantly driving home what differentiates Microsoft’s approach to AI:
“We want AI to be more like Iron Man and less like Ultron…. I want a Copilot that is going to have a conversation with me. It’s going to be patient. It’s going to be kind. It’s going to be helpful. And it’s going to be that thing that I can turn to, that I can ask for advice. It’s almost like having an infinite technical book.”
They show in a demo what AI can do, but they also show that sometimes AI gets things wrong:
“And you’re going to get to realize that it’s another tool, and just like you go to that blog post of that one person that’s excellent and sometimes it works for you and sometimes it doesn’t, you should simply ask the question again.” - Mark
“It’s confidently wrong.” - Scott
We can’t turn everything over to AI:
“I’m having this conversation with my son as he’s been applying to colleges, and he keeps [asking] me if he’s allowed to run his college essay through ChatGPT, and I keep trying to remind him that ChatGPT has never gone to college before. You are the one going to college. We need to make sure that we are able to decide for ourselves what’s right and what’s wrong and having that critical thinking skill is super important. This is a tool. It’s a spanner. We need to decide what we’re going to do with it.”
And finally, because nothing has changed in over a decade:
“Google with Bing.”
Teaching
When you watch Scott Hanselman give a demo, you can’t help but be impressed by his teaching. This shouldn’t be surprising. Depending on which part of Scott’s website you visit, he’s quick to describe himself as “teacher” or “professor”. It’s who he is.
When you adopt a mindset of teaching, it’s not enough to just pick an important concept and regurgitate it to your students (as if figuring out what’s actually important were easy!). You have to take the time to put yourself in their shoes, try to remember what it felt like to not know, and then invite them on a journey. And not just any journey: one that’s fun, exciting, and meaningful. Scott does that exceptionally well and is worth learning from.
He’s using ZoomIt. Thanks to @BranMacMuffin on Twitter for the reminder.