6 Quick Tips for Presenting Code in Visual Studio

As a frequent conference presenter and attendee, I often see live code walk-throughs. There are a variety of tweaks you can make to optimize the Visual Studio experience for presenting code to others. It’s important to minimize visual distractions, size text properly, and practice your demo at a low resolution.

To that end, here are six quick tips for presenting in Visual Studio.

1. Run Full Screen

Hit Alt+Shift+Enter to toggle full screen mode. This hides the Windows taskbar, the title bar, and many pieces of the Visual Studio UI such as Solution Explorer. This is especially helpful when presenting on low-resolution projectors. As a side note, this mode is also handy for everyday coding, since hiding all other OS elements helps you avoid distractions.

2. Configure Full Screen Mode

Visual Studio full screen mode hides all explorer windows by default. You may decide to show certain items to assist with the demo. Simply re-enable the portions of the UI you’d like to display while in full screen mode; Visual Studio tracks these settings separately, so it will remember your full screen layout from then on. Handy. (Also, remember you can open files by hitting Ctrl+, and typing the name, so you likely don’t need to display Solution Explorer at all.)

3. Enable Presenter Mode

Presenter Mode is a simple feature in Productivity Power Tools. Once you’ve installed the extension, hit Ctrl+Q to place the cursor in the Quick Launch input and type present. Select Present On/Present Off to toggle. You’ll find font sizes throughout the UI are increased.

Enable Visual Studio Presentation Mode via Quick Launch

4. Resize Further as Needed

In larger rooms with smaller projection screens, it may still be difficult for people in the back to read code, even with Presenter Mode enabled. In this case you can increase the font size of individual files by holding Ctrl and scrolling, or via the zoom dropdown in the bottom left-hand corner of Visual Studio. When using the above techniques, this is rarely necessary, which is good because this setting doesn’t “stick” between files. If you need a more global solution, you can increase DPI scaling in Windows, though you may notice sizing quirks in any apps that haven’t been updated to support DPI scaling.

5. Use ZoomIt

ZoomIt is a free Sysinternals tool that lets you zoom in on and annotate specific portions of the screen. Simply hit Ctrl+1 to zoom. This helps draw attention to an area and is also useful when a piece of UI outside of Visual Studio can’t be scaled by other means. I’ve even seen some presenters use ZoomIt throughout an entire presentation, though I find this a bit disorienting.

6. Use the Light Theme

The standard light theme of dark text on a white background is typically easiest to read on projectors. The lower contrast alternatives look great on a nice LCD, but often lack sufficient contrast for projector use. (Credit to @craigber)

Have other tips for presenting code? Please chime in below.

Two Quick TFS Performance Tips

Team Foundation Server (TFS) continues to improve, but one area I’ve struggled with recently is performance. I work in a very large codebase that knocks up against the 100,000 file limit with a single branch (yes, that’s a smell of bigger issues). Anyway, here are two quick TFS performance tips that may help you be more productive.

1) Create Separate Workspaces for Each Branch

Annoyed that TFS Source Control Explorer is slow? The culprit may be too many files in a single workspace. With over 100,000 files mapped, Source Control Explorer becomes extremely slow (we hit this cap with a single branch). The solution is simple: create a separate workspace for each branch you’re working with, and name each workspace descriptively so you can easily switch between them. You can manage and rename your workspaces in Visual Studio here:

TFS Workspaces
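
If you prefer the command line, you can script this setup with tf.exe. Here’s a rough sketch; the collection URL, workspace names, and local paths are placeholders for illustration:

REM One workspace per branch, named descriptively
tf workspace /new CodeBase-Main /collection:http://tfs:8080/tfs/DefaultCollection
tf workfold /map $/CodeBase/Main C:\src\CodeBase-Main /workspace:CodeBase-Main

tf workspace /new CodeBase-FeatureX /collection:http://tfs:8080/tfs/DefaultCollection
tf workfold /map $/CodeBase/FeatureX C:\src\CodeBase-FeatureX /workspace:CodeBase-FeatureX

REM List your workspaces to confirm
tf workspaces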

2) Consider Not Creating a Feature Branch at All

If you’re working by yourself on a feature, you can simply work off main and create a shelveset each day to save your work until you’re ready to commit to the main line. This avoids the time-consuming overhead of branching. Just think about the overhead you take on with a feature branch:

  • Create branch
  • Get the entire repo again
  • Burn a ton more space on your hard drive due to duplicated physical files
  • Keep the branch updated via merges from main
  • Merge your changes back to main later
  • Switch between your branch and main so you can fix bugs in the main line

That’s a lot of pain that may not be necessary. Assuming you’re working by yourself, the only potential downside to using shelvesets off of main that I see is this: Bug fixes in main may be tricky if your new feature changes impact code related to a reported bug in a previous version. That’s a minor downside that I find worth accepting for all the time it saves by avoiding the overhead listed above.
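
If you take this route, the daily shelveset routine from the command line looks roughly like this (the shelveset name and comment are just examples):

REM Shelve today's work; /replace overwrites an existing shelveset of the same name
tf shelve /replace FeatureX-WIP /comment:"End of day checkpoint"

REM Restore it later, on this machine or another
tf unshelve FeatureX-WIP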

Shadow DOM vs iframes

I’m really excited about the new HTML5 Web Components Standard. The Shadow DOM is particularly interesting, as it finally gives us encapsulated markup and styling. This should radically decrease the complexity of our CSS and help us finally design and deliver reusable components that don’t conflict with one another. We now have the tools to design our own custom HTML elements that feel native and offer even greater power than today’s HTML elements.

However, at first I had to wonder: Why do we need the Shadow DOM when we already have a way to encapsulate markup and styles using iframes? It’s an interesting point. Today iframes are commonly used to ensure separate scope and styling. Examples include Google’s embedded maps and YouTube videos.

However, iframes are designed to embed one full document within another. That means accessing the values of a DOM element inside an iframe from the parent document is a hassle by design. The iframe’s DOM elements live in a completely separate context, so you need to traverse that document to reach the values you’re looking for.

Contrast this with HTML5 web components, which offer an elegant way to expose a clean API for accessing the values of custom elements. Well-written web components that utilize the Shadow DOM are as easy to access and manipulate as any native HTML element. Try accessing the value of an input that exists in an iframe from the parent document. It is a painful, clunky, and brittle process in comparison.
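
To make that contrast concrete, here’s a rough sketch. The element IDs and the my-widget element are invented for illustration, and the iframe version assumes the frame is same-origin (cross-origin access is blocked outright):

// iframe: reach through the frame's separate document
var frame = document.getElementById('widget-frame');
var frameDoc = frame.contentWindow.document;
var amount = frameDoc.querySelector('input.amount').value;

// web component: the element exposes its own API, like a native element
var widget = document.querySelector('my-widget');
var widgetAmount = widget.amount; // a property the component's author chose to expose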

Imagine creating a page using a set of 5 iframes that each contain one component. Each component would need a separate URL to host the iframe’s content. The resulting markup would be littered with iframe tags, yielding markup with low semantic meaning that is also clunky to read and manage. Oh, and you’d also have to deal with the pain of properly sizing each iframe.

In contrast, web components support declaring rich semantic tags for each component. These tags operate as first-class citizens in HTML. This aids the reader (in other words, the maintenance developer).
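
For example, compare the two approaches side by side (the tag names and URLs here are invented for illustration):

<!-- iframes: URLs instead of meaning, plus sizing headaches -->
<iframe src="/components/user-profile.html"></iframe>
<iframe src="/components/shopping-cart.html"></iframe>

<!-- web components: semantic, first-class tags -->
<user-profile></user-profile>
<shopping-cart></shopping-cart>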

So while both iframes and the Shadow DOM provide encapsulation, only the Shadow DOM was designed for use with web components, and thus avoids the excessive separation, setup overhead, and clunky markup that occurs with iframes.

Excited about web components? Chrome and Opera already offer full support, so get started!

Knockout Bindings are Evaluated Left to Right

I just resolved an odd behavior that tripped me up. Have you ever attached a click handler to a checkbox or radio button with Knockout and wondered why the old value is received in the click handler? Here’s the explanation: in Knockout, bindings are evaluated left to right.

So if you put your bindings in this order:

<input type="checkbox" data-bind="checked: value, click: funcToCall">

Then funcToCall will see the updated state for the checkbox as expected. However, if you reverse the binding order to this:

<input type="checkbox" data-bind="click: funcToCall, checked: value">

Then the click binding will fire before the checked binding, which means the checkbox’s new state won’t yet be reflected in your view model. So be sure to declare your bindings in a logical order!
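
For reference, here’s a minimal view model to pair with the markup above. Note the return true, which tells Knockout to allow the browser’s default click action so the checkbox actually toggles:

var viewModel = {
    value: ko.observable(false),
    funcToCall: function () {
        // With checked declared first, value() already reflects the new state here.
        console.log('Checked is now: ' + viewModel.value());
        return true; // allow the default action so the checkbox toggles
    }
};
ko.applyBindings(viewModel);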

An Epic Week of Development in Norway

tl;dr: NDC was amazing! I was a guest on .NET Rocks! My recorded sessions from NDC in Oslo, Norway are below. And I finally got to meet Uncle Bob!

I just had an amazing experience at my first ever international conference. I’m back from attending the Norwegian Developer Conference in Oslo, Norway. It was an exceptionally well-run conference with a few features I’ve never seen before. One standout was the overflow room, where attendees could watch all 8 concurrent sessions at once! Great for those times when you can’t find a seat or aren’t sure which session to pick.

And I couldn’t have been more excited to finally meet one of my programming heroes, Uncle Bob Martin. We had a wonderful chat after his session and I thoroughly enjoyed getting to talk shop 1:1 with an author I’ve looked up to for so long. Thanks Bob!

I also got to see Scott Hanselman speak in person for the first time. Absolutely superb edutainment! And I got to meet Douglas Crockford and attend his excellent sessions as well!

.NET Rocks!

To top it all off, Carl and Richard from .NET Rocks invited me on for my first guest appearance on the show! I’ve enjoyed the show for years and was absolutely flattered by the invitation. They’re a lot of fun and two of the nicest guys one could hope to meet. The show was on Single Page Application development, and we dove into the unique challenges of SPA development in the automotive industry.

Give the .NET Rocks show a listen here!

Oh Yeah, I Spoke Too.

Finally, my sessions at NDC Oslo went great! It was standing room only in both sessions. A full room just makes speaking that much more fun by adding that extra spark of energy.

Crowd

Videos from both sessions I presented at NDC are now up on Vimeo.

This is a subset of my Pluralsight course. If you’ve seen the Becoming an Outlier course, you’ll find this contains some unique content, especially at the beginning. If you haven’t seen the course, this is really just a preview since it’s less than half the full course content.

And here’s the session we chatted about on .NET Rocks. This is a case study on the largest single-page application of my career. Many lessons were learned along the way!

In Summary…

There were some touchy points along the way…

But I’m not sure I’ve ever learned more at a conference. I’ll save my gushing on the amazing sessions I attended for a separate post. I feel so lucky to have been a part of it all!

The TDD Divide: Everyone is Right

I’ve been enjoying the back and forth regarding the Death of TDD on the interwebs. The intellectual volleying between “legalists” like Robert C. Martin (Uncle Bob) and “pragmatists” like David Heinemeier Hansson (DHH) is nothing short of fascinating. I have a tremendous amount of respect for both these gentlemen for different reasons, and I can see wisdom in each of their views.

DHH argues that an excessive fixation on unit testing has added indirection, abstraction, and conceptual overhead.

Don’t pervert your architecture in order to prematurely optimize for the performance characteristics of the mid-nineties. Embrace the awesome power of modern computers, and revel in the clarity of a code base unharmed by test-induced design damage.

Unit testing indeed isn’t free and, as DHH argues, provides questionable benefit to offset this cost. Yet he’s fixated on test speed being the reason that we code to an interface. I agree that fast integration tests that hit the database are much more practical in the age of SSDs and fast processors. However, speed isn’t the only reason coding to an interface has merit.

We also code to an interface so that:

  1. We can easily switch out the implementation behind the scenes.
  2. We can agree on the interface and have two separate teams handle each side of the interaction independently and concurrently.
  3. We can abstract away an ugly DB schema or unreliable third party.
  4. We can write tests first and use them to help drive the design. Uncle Bob and Kent Beck see this as a core benefit. DHH sees this as test-induced design damage.

I again see the wisdom of both sides. TDD has been shown to reduce bugs and improve design. Yet every abstraction has a cost and must be justified. Tests are, after all, *more code* to write and maintain. Be pragmatic. The fact is, in many kinds of software, occasional bugs are an acceptable risk. When they occur, fix them and move on.

Uncle Bob is fixated on craftsmanship, perfection, and centralized control. His brand is cleanliness and professionalism. The idea that the era of unit testing could feasibly be replaced by automated integration testing is, unsurprisingly, viewed as illogical heresy by existing thought leaders in the space.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” – Upton Sinclair

Of course, this quote cuts both ways. Some have accused DHH of declaring TDD dead merely because unit testing is hard to do in the very framework he created: Rails.

Bottom line

One important thing to keep in mind: Uncle Bob sells consulting. DHH sells software. It’s a common divide. Software “coaches” like Uncle Bob believe strongly in TDD and software craftsmanship because that’s their business. Software salespeople like Joel Spolsky, Jeff Atwood, and DHH believe in pragmatism and “good enough” because their goal isn’t perfection. It’s profit. So if you want to build the most beautiful, reliable, scalable software, listen to the consultant. If you want to build a profitable product, listen to the salespeople too.

The world is a messy place. Deadlines loom, team skills vary widely, and the impact of bugs varies greatly by industry. Ultimately, we write software to make money and solve problems. Tests are a tool that help us do both. Consider the context to determine which testing style fits for your project.

Uncle Bob is right. Quality matters. Separation of concerns and unit testing help assure the utmost quality, speed, and flexibility.

DHH is right. Sometimes the cost of unit tests exceeds their benefit. Some of the benefit of automated testing can be achieved through automated integration testing instead.

My take: Search for the wisdom in both of these viewpoints so you can determine where unit testing has merit on your project.


What’s your take? Chime in via the comments below or on Reddit Programming. Like this article? Submit it to Hacker News.

AngularJS: The De Facto Standard for SPA Development?

One year ago, I started a large Single Page Application (SPA) project. I spent a few weeks Googling and biting my nails, trying to choose between the various options for SPA development. I considered four leading players:

  1. Knockout with Durandal
  2. Ember
  3. Backbone with Marionette
  4. AngularJS

A year ago this felt like anyone’s race. I ended up selecting Knockout with Durandal for the project since the combination offered clear support for older versions of IE. I currently develop apps for automotive dealerships, which are notoriously slow to upgrade browsers. I enjoyed the development process with Durandal and Knockout and don’t regret my decision. There’s certainly no way I could’ve pulled off such a rich, interactive UI with jQuery alone.

Yet in the last year I’ve watched the tide shift heavily toward AngularJS.

AngularJS, KnockoutJS, Ember, Backbone

Angular’s momentum has become clear at various conferences across the country. Perhaps the most obvious recent example was at FluentConf in San Francisco:

That’s a pretty striking contrast. Granted, one could argue high conference session counts are merely a sign that Angular is a lot more complicated! But there’s no denying Angular has massive momentum. Enough to settle into a long-term #1 spot on Pluralsight’s top 100 courses. Angular even recently spawned its own conference: ng-conf.

Today marked another huge milestone for the Angular project: Rob Eisenberg, Durandal’s creator and director, announced that Durandal is being merged with Angular. He quietly joined the Angular team a few months ago, and going forward there will be only maintenance releases for Durandal 2. So Durandal’s upcoming convergence with Angular makes this basically a three-horse race.

Only a few years ago JavaScript developers abandoned MooTools, Prototype, and various other frameworks to declare jQuery the de facto standard. And today, Angular has such a clear lead in momentum that I’m comfortable with this declaration:

Angular has become the de facto standard for SPA development.

Why Has Angular Become so Popular?

I see three simple reasons:

1. Google Support – Sure, Google has a long history of killing promising projects. Remember Wave and Reader? But it’s a corporate powerhouse, and its backing lends a great deal of credibility to the project. Tell your boss you want to use a free framework from Google. That’s an easy sell.

2. Integrated and Opinionated – I enjoy working with Durandal, but to create a complex SPA with it you must ultimately rely upon multiple technologies: Knockout for binding, RequireJS for dependency management, and a promise library (I went with Q.js). Backbone takes a similar approach – it’s not very opinionated, so developers are free to make many architectural decisions on their own or to pull in libraries like Marionette.

In contrast, Angular is highly opinionated and offers a single integrated solution. It’s analogous to the Unix and Windows models. In Unix you perform complex tasks by piping commands; the composition of many small, targeted applications provides great power and flexibility. But the masses gravitate toward opinionated, integrated solutions like Windows. Why? A simple psychological principle called the tyranny of choice. In short, abundant choice often leads to misery. It’s easier to accept a few things you dislike in Angular than to be overwhelmed with decisions in less opinionated libraries and frameworks.

3. Testability Baked-in – The Angular team has gone to great lengths to assure that Angular apps are testable out of the box. Testing JavaScript is notoriously painful given its lack of native dependency management options and the constant temptation to interact directly with the DOM. Angular’s focus on dependency injection and testability is refreshing. It creates a “pit of success” for developers.
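
As a sketch of what that looks like in practice (the service and controller names here are invented), dependencies are declared rather than newed up, so a unit test can swap in a fake via angular-mocks:

// Production code declares its dependencies; Angular injects them.
angular.module('app', [])
    .factory('dataService', function ($http) {
        return { getItems: function () { return $http.get('/api/items'); } };
    })
    .controller('ItemsCtrl', function ($scope, dataService) {
        dataService.getItems().then(function (response) {
            $scope.items = response.data;
        });
    });

// In a Jasmine test, angular-mocks can substitute a fake before the controller is created:
// module(function ($provide) { $provide.value('dataService', fakeDataService); });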

The Future is Evergreen

I still have concerns about Angular. One must ask why Google is making this play at all. I believe it’s the same reason they built Chrome: Angular is Google’s new lever to push the web forward. That said, I worry that Google will push too aggressively on browser standards, thereby making the framework a non-starter for unlucky developers who must support older browsers. It’s clear Google prefers to err on the side of forcing the web forward. I admire this mission, but hate to see the developers supporting legacy browsers left with few viable alternatives.

Framework developers are starting to embrace a new ideal: soon only Evergreen browsers will be supported by modern frameworks like Angular. This term, coined by Paul Irish, has a simple premise: Evergreen browsers auto-update. Most of today’s browsers are Evergreen, including Chrome, Firefox, Safari, and IE10+. This trend is great news for web developers, who will soon be able to enjoy the power of ECMAScript 6 and the Shadow DOM with Web Components. But it means customers should be warned now: select an auto-updating browser ASAP. The web is surging ahead, and Google will use AngularJS to lead the charge.

Chime in on Hacker News, Reddit, or in the comments below.