SQLSaturday Charlotte: Lessons Learned

SQLSaturday Charlotte: Lessons Learned

We just wrapped up what has been the busiest, most stress-filled, yet rewarding couple of weeks in recent memory. Overall, the feedback we’ve received about the SQLSaturday Charlotte event has been amazing; thanks to all for the kind words that have been tweeted and blogged thus far! I do think that, for the most part, the event went off about as well as it could have, despite our lack of experience planning technical conferences. However, we definitely learned a lot that we’ll be able to apply should we ever do it again. I had many questions when I first started working on this event, so I wanted to get my thoughts down while they’re still fresh, with hopes that they might be helpful for future SQLSaturday planners.

Scheduling Concerns

One of my primary tasks was trying to coordinate the flood of speakers and sessions we had for the event, a flood created by none other than CSSUG president and SQL Sentry partner channel manager Peter Shire, whose skills at promoting can only be described as P.T. Barnum-like. (I thought for sure that P.T.’s first name must have been “Peter,” but turns out it was “Phineas”). Peter did a phenomenal job securing the support of an amazing list of 35 speakers, and getting the word out to the SQL Server community to the tune of about 400 registered attendees, 280 of whom actually made it to the event.

The starting point for the schedule planning was the facility. The Microsoft campus here is a fantastic facility in a great location close to the airport, downtown, etc.; however, it wasn’t necessarily designed to run an event like this. The initial room breakdown was:

Building AP1:
     Room 1: 200 seats
     Rooms 2,3,4,5: 20 seats each
     Cafeteria: 220 seats

Building AP2:
     Rooms 6,7,8: 50 seats each

The two buildings are about a 3-minute walk apart. Because of the relative size difference between the largest and smallest rooms, my biggest fears were putting a hot session in one of the smaller training rooms, causing massive overflow, and putting a not-so-hot session in one of the big rooms… either of which can exacerbate the other. The original layout had us using the 4 training rooms in AP1, the same building as the cafeteria and the 200-seat room; however, about a week before the event we got word from Microsoft that we’d been bumped by some internal training and had to use the 4 training rooms in AP2 instead. That left us with:

Building AP1:
     Room 1: 200 seats
     Cafeteria: 220 seats

Building AP2:
     Rooms 2,3,4,5: 20 seats each
     Rooms 6,7,8: 50 seats each

The AP2 training rooms are an exact mirror of those in AP1; however, the change made for a very lopsided layout, since it left us with a single large room in one building and 8 rooms in the other. This caused concern, since we’d been counting on a lot of interplay between the 80-100 people in the training rooms and the 200-seat room where I’d put all of the “hottest” sessions. We didn’t understand why the other group couldn’t just use the AP2 training rooms, but eventually found out – they were doing lab sessions, and apparently the workstations in AP2 aren’t the same class as those in AP1. We weren’t using the workstations, so it didn’t really matter to us, but apparently it did to them. 😉

As it turned out, it probably did end up causing lower attendance for the featured sessions and overflow in some of the smaller rooms, mainly because some attendees seemed reluctant to leave AP2 when there were so many other good sessions going on right there. But did it even come close to ruining the conference? Certainly not.

Another issue with the change was that we didn’t get the new room names in time to put them on the signage, so we ended up using numbered rooms that were offset from the tracks by one – Track 2 was in Training 1, Track 3 in Training 2, etc. This was very confusing, even to me. At one point I even directed Wayne Snyder into the wrong room for his session. Sorry Wayne!

The first thing I did was try to categorize the 80+ submitted sessions so I could get a better handle on the actual areas of interest we had to work with. What I was really looking for at that point was whether we could have any focused tracks. I ended up with 4 major categories and 24 subcategories:

 
Category – Subcategory Session Count
Admin – Broker 2
Admin – Clustering 2
Admin – DB Design 2
Admin – Hardware 2
Admin – Maintenance 1
Admin – Memory 1
Admin – Performance 11
Admin – PowerShell 5
Admin – Replication 3
Admin – Troubleshooting 2
Admin – Virtualization 2
BI – Dev 5
BI – SSAS 1
BI – SSIS 5
BI – SSRS 4
Dev – DB Design 2
Dev – Performance 4
Dev – Tools 2
Dev – TSQL 2
Dev – XML 1
General – Career 2
General – General 4
General – Social 3
General – Tools 2

It was apparent that for a couple of areas, like Performance and BI, focused tracks might work, but for others we only had one or two sessions, so trying to do clean, interest-based tracks would be fruitless. I should mention that although for some sessions the categorization was easy, for others it was not clean at all, so I just took my best stab at it. Even so, there were a few that seemed to defy categorization, like Kevin Kline’s “Top 10 Mistakes on SQL Server” and Sergey Pustovit’s “SQL Server 2008/R2 Overview,” and those ended up in “General – General.” 😉
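For anyone doing this in a spreadsheet or script, the tallying step above boils down to counting sessions per Category – Subcategory pair. A minimal sketch, with hypothetical session titles (the labels mirror the table above):

```python
from collections import Counter

# Hypothetical sample of submitted sessions: (title, category, subcategory).
sessions = [
    ("Query Tuning Basics", "Admin", "Performance"),
    ("Execution Plans Deep Dive", "Admin", "Performance"),
    ("Intro to SSIS", "BI", "SSIS"),
    ("T-SQL Tips and Tricks", "Dev", "TSQL"),
]

# Tally sessions per Category – Subcategory pair, as in the table above.
counts = Counter((cat, sub) for _title, cat, sub in sessions)

for (cat, sub), n in sorted(counts.items()):
    print(f"{cat} – {sub} {n}")
```

With 80+ real sessions, sorting the resulting counts makes it obvious which areas can support a full track and which only have a session or two.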

I was also hoping that by having these categories, I’d be able to send them out to a few people who had planned other recent SQLSaturdays and ask them to order the list from most to least popular at their event. I got some extremely valuable feedback from those planners, but since they didn’t have accurate attendance numbers for all topics, no rankings came back. Because this info could have been invaluable to me and helped avoid some of the session/room mismatches we experienced, I’m posting all of our attendance numbers here. Bear in mind they are only estimates, not exact counts. If you see anything that looks way off, let me know.

Next, I added a few columns to my sessions spreadsheet and used them to come up with a universal rating measure to help decide where sessions should go. Here they are:

Hot Topic – Yes/No – Is the topic particularly popular right now, and are there a lot of people talking about it in the media? For example, I don’t think that anyone would argue that PowerPivot is a hot topic right now. If Yes, 2 points were added to the rating.
Premiere Speaker – Yes/No – Certain speakers have significant name recognition, and will draw attendees regardless of the topic.  If Yes, 3 points were added to the rating.
Appeal – A 1-10 point value added to the rating based on how broad an audience might be interested in the topic, i.e., the size of the base population. For example, PowerPivot may indeed be a “Hot Topic,” but if the potential base of users is small, the appeal number will be lower.

I used the above to calculate a simple rating value, and the higher the number the bigger the room. Certainly not very scientific, but probably good enough for our purposes. Ultimately there are so many other variables involved, IMO if you try to be any more exacting you run the risk of skewing things too far one way or another. For example, if I’d actually tried to estimate the number of “BI” users or “Admin” users that would show in lieu of the general “Appeal” number, and my estimates for those populations were off, it could cause trouble. Now, keep in mind that this approach is really most applicable to a first event, where you really have no idea who is going to show up.  Next time we’ll have a much better idea of the different populations here in our region and will use actual numbers as a basis for predicting session popularity.
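For illustration, the rating scheme above can be sketched as follows. The session data and room sizes below are hypothetical; only the point values (2 for a hot topic, 3 for a premiere speaker, plus the 1-10 appeal score) come from the description above.

```python
def rating(hot_topic: bool, premiere_speaker: bool, appeal: int) -> int:
    """Simple rating: higher number -> bigger room."""
    return (2 if hot_topic else 0) + (3 if premiere_speaker else 0) + appeal

# Hypothetical sessions: (title, hot_topic, premiere_speaker, appeal).
sessions = [
    ("PowerPivot Intro", True, False, 4),   # hot topic, but narrower base
    ("Top 10 Mistakes", False, True, 9),    # premiere speaker, broad appeal
    ("XML in SQL Server", False, False, 3),
]

# Sort by rating, highest first, and pair with rooms from largest to smallest.
rooms = [200, 50, 20]
ranked = sorted(sessions, key=lambda s: rating(*s[1:]), reverse=True)
for (title, *attrs), seats in zip(ranked, rooms):
    print(f"{title}: rating {rating(*attrs)}, room size {seats}")
```

The point weights themselves are guesses, of course; the value of the exercise is forcing a consistent, comparable number onto every session before assigning rooms.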

Another idea that came up was surveying people in advance to see which sessions or categories they are most interested in. If we had had more time, and/or if there was a capability to do this built into the sqlsaturday.com website, we likely would have made use of it. As it was, there was just too much else going on with the planning to even think about attempting something like that.

Below I’ll discuss a few of the bigger session surprises in a bit more detail.

Hotter-Than-Expected
TSQL
Specifically, Mike Walsh’s “You Can Improve Your Own SQL Code” and Geoff Hiten’s “Bad SQL” both ended up in a smaller 20-seat room, and both could easily have filled a 50-seat room.

SSRS/SSIS
The 25-seat room we had set aside for these BI topics had 46 people crammed into it at one point. I didn’t put this track into one of the larger rooms mainly because I just wasn’t quite convinced it would compete with Performance, Virtualization, PowerShell, and the other “hot” topics I had in the 50-person rooms. I also had a lot of people in the know telling me that BI always draws a smaller crowd at these things, so I went with our only “in between” space, the 25-seat conference room that we knew could handle up to 35 with some standing. Good call for not putting it in a 20-seat room, I guess, but bad call for not giving it a 50-seater. If you were one of those who had to act like a sardine for an hour, my sincerest apologies; we’ll give you more room next time.

Not-So-Hot
Virtualization

This one shocked me. With as much talk (and uncertainty) as there is right now about this topic and SQL Server, I was worried that a 50-seat room would overflow for Aaron Nelson’s and Denny Cherry’s sessions. I was way off. Both would have been fine in one of the 20-seat rooms.

SSDs (Solid State Drives)
Again, I was shocked when Kendal Van Dyke’s session wasn’t packed. Great, well-known speaker with what I thought would be a hot topic, but it would have been fine in a small room.

Data Compression
The two sessions we had on this topic were the most lightly attended of all, with only 4-5 people in each. They were already in a smaller room, but I had expected to see more interest since the feature is new to 2008 and disk space is always a concern; the Enterprise-only caveat or the other concurrent sessions may have been big factors.

Too Much of a Good Thing?

The “good thing” in this case being PowerShell. I guesstimated that this would be a hot topic, as it has only gathered steam with DBAs over the past year or so, and with remoting and other cool features in Windows Server 2008 R2 and PowerShell 2.0 there’s a lot of new stuff to learn about as well. Turns out I was right… kind of. Aaron Nelson’s first two sessions in a 50-seat room were pretty much full, but Allen White’s two sessions immediately following were, well, not so much. Again, it certainly wasn’t Allen as a speaker, since he’s highly regarded and authoritative on this topic. However, I did hear from more than one person that two PowerShell sessions probably would have been sufficient for this crowd. I think Allen was a little disappointed, but being a glass-half-full kind of guy, he said it was good practice for TechEd. Thanks Allen!

Premature Raffling

One of our biggest mix-ups of the day was the fact that we started the vendor giveaways right after the last session was supposed to end at 5pm. We still had a session going in the other building that went a bit long, so those attendees and the speaker missed out on the raffles. To make matters worse, that particular speaker would have won an iTouch had he been present!  Sincere apologies here. We felt bad, and he did seem a bit upset — I should have reminded him that he won a WinMo device at our very first SQL Sentry giveaway several years ago, although I’m not sure it would have helped at that point. 😉

Bottom line, ensure all sessions have finished and folks have had time to get to the raffle before you start. Apologies to anyone else that missed out on this – if you did, send us an email and we’ll send you either a SQL Sentry T-shirt, USB 2.0 hub, or iPhone cover.

Where’d the Speaker Go?

We had a particularly well-attended session on IO Performance… the only problem was that we had no speaker. I made a quick call over the two-way to registration to see if anyone had seen him. They had not. I let the attendees know, and they quickly filed out to look for another session, some with looks of disappointment. As it turns out, the speaker had cancelled a couple of weeks earlier, but apparently the notice got lost in the hundreds of other emails that were flying around about the event. Mistakes like this happen and stuff just gets missed sometimes… which is why there should have been a mechanism in place to catch it. Had we done a speaker roll check earlier in the day, we would have caught it and probably could have gotten another speaker to fill in, or at least made attendees aware of the cancellation in advance. We just assumed all speakers were there – not a safe assumption with 35 speakers.

Bear in mind that some speakers may show up later if their first session isn’t until later in the day, so you may not want to check roll first thing in the a.m., but either way it’s generally an easy matter to at least confirm whether or not the speaker is in town. In this case, they were not.
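The roll check described above can be as simple as diffing the speaker schedule against registration check-ins, while giving afternoon speakers the benefit of the doubt. A minimal sketch; all names and times here are hypothetical:

```python
# Schedule as a map of speaker -> first session start time (zero-padded HH:MM,
# so plain string comparison sorts correctly).
schedule = {
    "Speaker A": "09:00",
    "Speaker B": "13:00",
    "Speaker C": "10:15",
}

# Speakers who have checked in at the registration desk so far.
checked_in = {"Speaker A"}

# Speakers with morning sessions who haven't checked in need a phone call;
# afternoon speakers may simply not have arrived yet.
missing = sorted(
    name for name, start in schedule.items()
    if name not in checked_in and start < "12:00"
)
print(missing)  # ["Speaker C"]
```

Even a paper checklist at registration would do; the point is that someone explicitly confirms every speaker, rather than assuming all 35 showed up.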

Plan for Cancellations

Jorge Segarra (aka @SQLChicken), one of the organizers of the Tampa SQLSaturdays, told me to expect 5-7 cancellations. I thought that sounded high – how can that many speakers commit to something like this, then just cancel?!? Well, he was right. We heard many different reasons, but what can you do? Fortunately, we had enough speakers with enough sessions to make up for it. I think 5-7 backups is a good number to plan for at an event of this size, probably fewer for smaller events. If you saw a session on the schedule that seemed a bit out of place for the room or the other sessions around it, it was probably a backfill from a cancellation.

A Final Word

It’s important to note that a lot of these scheduling dilemmas were due to the broad range of room sizes we had here. If your event has 300 people and 4-5 rooms that hold 75 people each, you’ve got a lot more margin for error, and you probably won’t need to go to this level of detail. Anyway, I hope some future SQLSaturday planners will find something here that’s useful. If any of you have questions about what we did or why, I’ll be glad to help however I can. Please post them here or shoot me a DM or email.

A Huge Thanks to All

I wanted to take this opportunity to thank all of those that helped with the planning and organization for this great event. First, I wanted to thank Bill Walker, head of the SQL Server CSS team here in Charlotte, and his counterpart Lynne Moore for their immediate and unwavering support. From day one, they threw all of their considerable resources behind the event and without them it just wouldn’t have happened.  All of us were truly amazed at the forces they were able to mobilize so quickly to just get it done. I also wanted to thank Sergey Pustovit, Chris Skorlinski and Evan Basalik from the SQL Server CSS team for putting on some great sessions, and all of the other CSS members who volunteered their time whose names I don’t know – they ran the “SQL Clinic” booth and helped out in innumerable other ways.

Next I wanted to thank all of the people here at SQL Sentry who volunteered their time, in no particular order — Peter Shire, Karen Gonzalez, Nick Harshbarger, Brooke Philpott, Jason Hall, Natalie Wieland, Jason Ackerman, Steve Wright, and also Ken Teeter for the great photography.

Again in no certain order, sincere thanks go out to Jorge Segarra, Aaron Bertrand, Andy Kelly, Grant Fritchey, Tim Ford, John Welch, Rafael Salas, Geoff Hiten and of course Andy Warren for allowing me to pick their brains during the planning process. Not sure what I would have done without their input and words of wisdom.

And last but certainly not least, thanks to all of the speakers who came from near and far to be here. We are greatly appreciative that you decided to take the time to provide such a fantastic educational opportunity for our SQL Server community here in the Carolinas.

Greg is founder and Chief Scientist at SentryOne. He is a Microsoft developer by background and has been working with SQL Server since the mid 90s. He is perpetually engaged in the design and development of SQL Sentry, Plan Explorer and other SentryOne solutions for optimizing performance on the Microsoft Data Platform.
