Entity Framework Code First Migrations in a Team Environment

Managing a database schema has seen several advances over the years in the .NET stack. Circa 2010, Visual Studio began shipping with a new Database Project, and some SKUs included compare tools for diffing both the schema and the data itself. Around the same time, Entity Framework launched; at first it supported only the Model First and Database First approaches, but it soon added the now super-popular Code First approach.

Code First was introduced for people who wanted to spend their time writing code instead of sitting in a database designer. But if we're going to stay out of the database designer, how do we manage our database's schema? Enter Migrations. Code First Migrations can analyze your model and automatically create schema update files to bring your database in line with your latest model changes. When I started using Migrations, I was in love. It was a serious time saver and super easy to use.

Soon enough, after touting and bragging about how I had found the new hotness, I introduced migrations on a project I was working on with a few other people. The first week went…well…not well. I really thought this whole thing was gearing up for me to end up with some serious egg on my face.

The Pain, Oh The Pain

So, in order to understand why there was pain, we first need to understand how migrations work. Before doing any deep dives, I made assumptions. My main assumption when I started with Code First was that when I ran the 'Add-Migration' command, the little minions inside were going out to my database and making notes about what was different between the database and my Code First model. That's logical, right? It's how we tracked schema changes with the Schema Compare tool in the past. Well, wrong-o. Code First doesn't care about your database at all.

In reality, migrations do their comparison against the last known state of the model, by making use of a snapshot file. Whenever you issue an 'Add-Migration <some migration>' command, migrations creates a snapshot of your entire model. The next time you issue an 'Add-Migration <some other migration>' command, migrations compares the current state of the model against the snapshot that was created with 'some migration'. Seeing the problem yet?
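
To make the mechanics concrete, here is roughly the pair of files that an 'Add-Migration SarahMigration1' command scaffolds (a sketch; the migration name, timestamp, and table names are hypothetical). The part that matters for this story is the metadata class: its Target property returns the serialized model snapshot, and that snapshot, not your database, is what the next Add-Migration compares against.

using System.Data.Entity.Migrations;
using System.Data.Entity.Migrations.Infrastructure;
using System.Resources;

// 201401151030000_SarahMigration1.cs - the schema changes
public partial class SarahMigration1 : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Customers", "MiddleName", c => c.String());
    }

    public override void Down()
    {
        DropColumn("dbo.Customers", "MiddleName");
    }
}

// 201401151030000_SarahMigration1.Designer.cs - the snapshot metadata
public sealed partial class SarahMigration1 : IMigrationMetadata
{
    string IMigrationMetadata.Id
    {
        get { return "201401151030000_SarahMigration1"; }
    }

    string IMigrationMetadata.Source
    {
        get { return null; }
    }

    // The compressed model snapshot itself lives in the migration's
    // .resx file; this property is how migrations reads it back.
    string IMigrationMetadata.Target
    {
        get { return new ResourceManager(typeof(SarahMigration1)).GetString("Target"); }
    }
}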

This all works fine and dandy when you're working alone. Now let's imagine we have two developers, Sarah and Roger. Sarah does some work, creates a migration, SarahMigration1, and checks in. Roger gets latest and sees Sarah's migration, SarahMigration1. He begins doing some work and creates a migration of his own, RogerMigration1. Behind the scenes, migrations compares Roger's changes against the snapshot and spits out a migration file. Sweet. But Roger isn't done working and doesn't check in his changes yet.

Meanwhile, Sarah needs to create another migration, so she issues an 'Add-Migration SarahMigration2' command. So what happens? Migrations doesn't know about RogerMigration1, or the snapshot that was created along with it, so it compares Sarah's current model against the only snapshot it knows about, which is SarahMigration1. A migration file is created, and since Sarah knows nothing about Roger's changes, her new migration looks and seems right. But what happened behind the scenes is going to cause some problems: the snapshot created by Sarah's last command doesn't include Roger's changes.

So Sarah checks in and Roger gets latest. Since migrations lists migrations by date, he sees SarahMigration1, RogerMigration1, and SarahMigration2. OK…seems fine. Oh, but it's so not. When Roger goes to create his second migration, migrations compares his current model against the latest snapshot, the one in SarahMigration2, but that snapshot knows nothing about RogerMigration1's changes. So migrations generates all of Roger's changes, including the ones already defined in RogerMigration1. Whoops.

The Solution

I like to call this the 'Oww, My Toes' scenario. The obvious solution might be to just delete RogerMigration1. While that might work in some scenarios, if you've already updated your database, you're going to be in a world of hurt if you do that…so don't do that.

To fix this scenario, you have two options:

Option One: Generate A Blank Migration
  1. Write all of your pending model changes to a migration and update your database.
  2. Get latest from source control
  3. Issue an ‘Add-Migration <some name> -IgnoreChanges’ command

This method results in a blank migration file being added to your codebase, but it is quick and easy, so the tradeoff of having blank files may be worth it. Note that this option will not work if you have already gotten latest before writing your pending model changes to a migration, so you may want to get into the habit of writing your changes to a migration before getting latest.
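
For example (the migration name here is hypothetical), after getting latest you would run:

Add-Migration MergeTeamChanges -IgnoreChanges

The file it scaffolds is intentionally empty; its only job is to capture a fresh snapshot of the merged model:

public partial class MergeTeamChanges : DbMigration
{
    // Intentionally empty: -IgnoreChanges skips scaffolding schema
    // changes and only records a new snapshot of the current model.
    public override void Up()
    {
    }

    public override void Down()
    {
    }
}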

Option Two: Regenerate the Snapshot
  1. Find the migration before the one that matches your current database state.
  2. Issue the command 'Update-Database -TargetMigration <migration before current>'. This will revert your database 'back to good', if you will.
  3. Delete the migration that is in between the other two, as well as the latest migration that contains the duplicate migration code.
  4. Issue the command 'Add-Migration MyNewMigration'. This will create a new migration that contains all of your pending changes along with a fresh snapshot.
  5. Issue the command 'Update-Database'.

This should bring the database back to good and in line with the migration files, as well as create a new snapshot that matches the current Code First model. Note that this option only works if the latest migration exists in your local workspace only. It cannot have been committed to source control.
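
Using Roger's situation as a sketch (migration names hypothetical, and assuming SarahMigration1 is the last migration his database and the migration files agree on), the Package Manager Console steps would look like this:

# 1. Revert the database to the last known-good migration
Update-Database -TargetMigration SarahMigration1

# 2. Delete RogerMigration1.cs and the duplicate-laden RogerMigration2.cs
#    from the Migrations folder, then scaffold them again as one migration
Add-Migration RogerCombinedChanges

# 3. Apply it
Update-Database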

Whew.

Best Practices For Teams Using Migrations

  1. Turn off automatic migrations – This isn't just for teams; it's for pretty much anyone using migrations. Automatic migrations take all control away from the developer over how the database gets migrated. You should be in control of the migration files to avoid 'it just deleted all my data' scenarios. (Yes, it will do this. Be extra sure automatic migrations are off for anything in production.)
  2. Get latest often – Always having the latest helps reduce the number of times this scenario occurs.
  3. Check in often – Conversely, be a good team member by getting new migrations checked in as soon as possible.
  4. Designate a migration master – This certainly isn't going to work for all teams, and it comes with the risk that the migration master becomes a bottleneck for the whole team. The idea is that the other developers make changes to the context model, but the actual migration files are always created by one person; that person is guaranteed to have the correct snapshot of the model if they are the only one creating migration files. This is the only option that completely eliminates the 'Oww, My Toes' scenario.

Understanding the Problem is the Hardest Part of Any Software Project

It doesn't matter what piece of software you're building, the key to success lies in having a deep understanding of the problem you are aiming to solve. It's all too common for a project to get well underway before the problem domain is fully understood. It's a natural thing to happen.

When a client comes to us with an idea for software, we usually begin by talking about their idea in technical terms. While they're explaining whatever requirements they've put together, our wheels start spinning with how we're going to architect our solution, or how we're going to design this feature or that. Before we know it, we fire up our IDE of choice and start hammering out our solution.

Before long, we begin to run into scenarios where we're not sure what the right answer is. At this point, we're left with a choice: get the answers to the questions we have so we better understand the problem, or make assumptions about what is probably the right answer and keep 'being productive'. The misconception is that breaking stride to get a better understanding of the problem is not productive. When we choose assumptions, we've already set ourselves up for pain. It doesn't necessarily mean we're going to fail, but there will be pain. By jumping straight to implementation, we give up great opportunities to dive deep into the problem domain. Taking that time isn't just still productive; it actually makes us more productive in the long run, because we will inevitably be more accurate in our solution and require much less rework.

So how do we dive deep into understanding the problem domain? The short answer is to ask questions. But it’s not just asking questions, it’s asking the right questions.

The end user is your greatest asset. The key to understanding the problem is to understand how the end user works and how the product fits into how they work. Get the end user talking and keep them talking by asking “What else?” and “Tell me more about _____”.

Understand the business domain. Your technical domain should match your user’s business domain. When you begin introducing technical terms to a user’s vocabulary, you are attempting to change the way they work. Instead you should be aiming to avoid technical terms that have no place in the business domain.

Understanding the problem domain is both the hardest and the most important aspect of developing any piece of software. I try never to assume that I know enough about the business domain to make assumptions about it. It's always a better use of time to get answers to questions as they come up than to make assumptions and continue down the wrong path.

SlowCheetah for Configuration Transforms for Windows Apps

UPDATE: The creator of SlowCheetah has announced that he will no longer be supporting SlowCheetah. His intent was for Microsoft to build this functionality into Visual Studio out of the box. Unfortunately, that has not yet happened. If you would like to continue to have this functionality in future versions of Visual Studio, please vote for the feature here: http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2043217-support-web-config-style-transforms-on-any-file-in.

I was talking with a co-worker a few days ago, and he mentioned that he was using build events and some xcopy commands to replace his app.config files when building his projects. I asked him why he wasn't just using SlowCheetah, and he looked at me like I was from Mars. Apparently not everyone knows there is a tool out there that makes app.config files as easy to transform as web.config files. Enter SlowCheetah. Just install the VSIX, and you're ready to go.

I will walk you through a simple implementation of SlowCheetah in a console application. The steps below assume you have already downloaded and installed the SlowCheetah VSIX from here.

Let's assume I have a new console application. By default, there will be two configurations in Configuration Manager: Debug and Release. Let's assume I want to add a third called 'QA'.

Add a New Configuration

Open Configuration Manager by right-clicking your solution in Solution Explorer and selecting 'Configuration Manager'. In the Active Solution Configuration drop-down list, select 'New'. Name it QA, make sure the 'Create new project configurations' check box is checked, and click OK.

[Screenshot: Configuration Manager]

Install SlowCheetah via NuGet

Open the NuGet Package Manager, search for SlowCheetah, and install it.

[Screenshot: NuGet Package Manager]

Create an application setting in App.Config

Here we will create an app setting that we can use to demonstrate the transform capability that SlowCheetah gives us.
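
If you are following along without the screenshots, a minimal App.config along these lines will do (the 'Environment' setting name is just an example):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="Environment" value="Debug" />
  </appSettings>
</configuration>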

[Screenshot: App.config with the sample app setting]

Create a new Transform

Right-click your app.config file and select 'Add Transform'. A new configuration file is created as a child of your app.config. If you are familiar with the web.config transform capability of Visual Studio, this should look familiar.

[Screenshot: the Add Transform menu option]

Note that App.QA.config has been created.

[Screenshot: the new transform files nested under App.config]

Add a Configuration Value to App.QA.config

Now create an app setting in your new config file. This is the value that will be used during the transform.
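
Here is a sketch of what App.QA.config might contain, assuming the example 'Environment' setting from above. The xdt:Locator attribute finds the matching key in App.config, and xdt:Transform swaps in the QA value:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="Environment" value="QA"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>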

[Screenshot: the QA app setting]

Preview the New Transform

If you are using Visual Studio 2013, the ability to preview your transforms is built in. Just right-click your transform file and select 'Preview Transform', and you will see your transform changes. If you are using a version of Visual Studio prior to 2013, you will need to actually run the transform to see the result.

[Screenshot: the Preview Transform menu option]

[Screenshot: the transform preview result]

Deployment

If you are doing some sort of automated deployment, you will need to take some extra steps to make sure the transforms run on your build servers. The creator of SlowCheetah has put together a good step-by-step on how to get that going here.

Conclusion

Configuration transforms are a powerful tool when developing applications for different environments. We have been able to do this easily in web applications for quite some time in Visual Studio; until SlowCheetah, however, there was no comparable way to transform config files in Windows apps.

Migrating a Project from Database First to Code First

Overview

So you just pushed your application to production and used Microsoft's shiny new ORM. It's 2008 and you're on the bleeding edge of .NET technology, implementing Entity Framework. Your EDMX paired with your database project keeps your project nice and organized in source control. Great job. But fast-forward to today, and Entity Framework Code First is all the rage. What do you do with that aging Database First design, along with that EDMX in all its glory? Nuke it. You don't need it anymore.

I sure hope you didn't just blindly nuke it and check in. We still need that EDMX for a bit, but not for long. We're going to walk through the process of converting your old busted setup to the new hotness of Entity Framework Code First.

Migrations are your friend, but not the kind of friend you leave home alone with your significant other. Be sure to use them, but I highly recommend turning off automatic migrations. Anything with that much blind control over your app is something you should consider VERY carefully before turning on.

Note: This process assumes you are using the Database First approach, not the Model First approach. If you use the Model First approach, you will have some legwork to do to determine what your EDMX might be doing that cannot be reverse engineered from the database.

Disclaimer: This is a fairly significant change you will be making to your project, so make sure you plan for regression testing everything.

Now, let’s get on with it:

Step 1: Generate Your Context, Entities, and Mapping Files

Microsoft has released a Visual Studio plugin that will generate POCOs and a context based on an EDMX. This will save you a whole lot of time. Head on over here and install the plugin.

Once installed, move over to your project and right-click your project file. There should now be a context menu item, Entity Framework; select that, and then Reverse Engineer Code First.

[Screenshot: the Entity Framework > Reverse Engineer Code First context menu]

Select the database you would like the reverse engineering process to be based on.

[Screenshot: the database connection dialog]

Once you click OK, a folder called Models will be created in your project containing your new context, entities, and fluent mapping configurations.

[Screenshot: the generated Models folder]

Step 2: Remove the Old Context and Update the Project to Use the New One

Now that you have created all of your new entities and context, the old ones can be removed. Delete the EDMX and all associated files (Context.tt, etc.).

Step 3: Enable Migrations

As I mentioned before, we are NOT enabling automatic migrations; we are only enabling migrations. This means we will manually create migrations using the Add-Migration command in the Package Manager Console.

In the Package Manager Console, make sure the Default Project is set to the project that contains your context. Then enter the command Enable-Migrations.

[Screenshot: running Enable-Migrations in the Package Manager Console]

You will notice that a Migrations folder has been created containing a Configuration.cs file. In Configuration.cs, make sure automatic migrations are set to false.
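
For reference, the relevant part of the generated file should end up looking roughly like this (the MyDbContext name is assumed to match the context used in the next step):

using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
{
    public Configuration()
    {
        // Schema changes should only ever happen through explicitly
        // scaffolded migration files, never automatically.
        AutomaticMigrationsEnabled = false;
    }
}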

Step 4: Create and Set Database Initializer

Create a new class called MyDbInitializer:

using System.Data.Entity;
using MyProject.Data.DataAccess.Migrations;

namespace MyProject.Data.DataAccess.EntityFramework
{
    // Applies any pending migrations to the database the first time
    // the context is used.
    internal sealed class MyDbInitializer : MigrateDatabaseToLatestVersion<MyDbContext, Configuration>
    {
    }
}

You will notice that the initializer class inherits from the MigrateDatabaseToLatestVersion class. This is likely the initializer behavior you will want if you have an existing database already in production. If you have special circumstances, be sure to review all of the default initializers and/or look into building a custom initializer.
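
The initializer also needs to be registered before the context is used for the first time. One common way to do that, assuming an ASP.NET application with a Global.asax, is a sketch like this:

using System.Data.Entity;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Register the initializer so pending migrations are applied
        // the first time the context touches the database.
        Database.SetInitializer(new MyDbInitializer());
    }
}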

Step 5: Implement the New Context

You will want to crack open your web.config and replace the old connection string (the one with all of the metadata stuff) with a new one. The new connection string should look like any plain old ADO.NET connection string.
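
As a sketch (the connection string names, model name, server, and database here are all hypothetical), the swap looks like this:

<connectionStrings>
  <!-- Old Database First connection string (remove this one): -->
  <add name="MyEntities"
       connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MyDb;Integrated Security=True&quot;"
       providerName="System.Data.EntityClient" />

  <!-- New plain ADO.NET connection string for the Code First context: -->
  <add name="MyDbContext"
       connectionString="Data Source=.;Initial Catalog=MyDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>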

You will now want to replace the references to the old context with the new one. (Shortcut: you could just rename the new one to match the old one’s name).

Note: You may encounter a bit of a gotcha here. Since the new context is of type DbContext and the old one was of type ObjectContext, you may find that the compiler complains about some things. The DbContext is essentially a lighter-weight wrapper around the ObjectContext, so there may be things you are using that the DbContext does not support. Research any of these issues that come up to see whether the DbContext can support them. If all else fails, the ObjectContext can be obtained from a DbContext if you absolutely need it (this comes with a performance hit, so use it wisely). The syntax for getting the ObjectContext from a DbContext is:

using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Core.Objects; // EF6 (in EF5 this is System.Data.Objects)

public class MyContext : DbContext
{
    // Exposes the underlying ObjectContext for the rare operations
    // the DbContext API does not cover.
    public ObjectContext ObjectContext()
    {
        return (this as IObjectContextAdapter).ObjectContext;
    }
}

Step 6: Create Your Initial Migration

If we tried to run the project right now, the application would throw an error letting you know there are pending changes that need to be included in a migration before the application can proceed. So we are going to create our initial migration. In the Package Manager Console, enter the command Add-Migration initial.

[Screenshot: running Add-Migration initial in the Package Manager Console]

In your Migrations folder, a file should have been created: YYYYMMDDHHMMSS_initial.cs. This should be a complete representation of your entire existing database.

EF keeps track of changes to the data model by updating a table in your database called __MigrationHistory (under System Tables). Since your database already exists, it does not have this table, so when this migration runs, it will attempt to re-create all of the objects in your database. This is bad, and we don't want that. We can use a trick to tell EF not to re-create the objects when it runs this migration: in your initial migration class, comment out all of the code in the Up method. That's it, that's the whole trick.

public partial class initial : DbMigration
{
    public override void Up()
    {
        // All of the scaffolded CreateTable/AddColumn calls go here,
        // commented out, so this migration is a no-op against the
        // existing database.
    }
}

Step 7: Update the Database

Now it's time to update your database. In the Package Manager Console, enter the command Update-Database.

[Screenshot: running Update-Database in the Package Manager Console]

This will create the __MigrationHistory table and record that this initial migration ran, so moving forward EF will view your database as 'up to date' with your data model. (If you want to create the database from scratch using Code First at some point, you will need to uncomment the migration. It can safely be uncommented once it has been run against the existing database.)
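
If you want to sanity-check which migrations EF now considers applied, the Package Manager Console can list the contents of __MigrationHistory for you:

Get-Migrations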

That’s it.  You should now be able to run your project. Now you need to regression test everything really well.

Conclusion

By following these steps you should now be fully running on Code First with Migrations.  Happy Coding!

OpsHub and Polaris Solutions Announce Partnership to Drive Collaboration and Efficiency of Teams in the Software Development Lifecycle

OpsHub and Polaris Solutions announce a partnership offering software development organizations a comprehensive Team Foundation Server (TFS) based solution for quick and easy TFS adoption, driving collaboration and efficiency of cross-functional teams in the software development lifecycle.

Palo Alto, CA, Chicago, IL and St. Louis, MO— OpsHub and Polaris Solutions are pleased to announce a partnership enabling easy adoption of Team Foundation Server (TFS) to drive collaboration and efficiency of teams in the software development lifecycle. With this partnership, OpsHub and Polaris Solutions can offer the industry-leading Application Lifecycle Management (ALM) integration and migration platform together with TFS expertise and best-in-class ALM implementation and consulting services.

OpsHub and Polaris Solutions are both Microsoft Visual Studio 2013 Launch Partners and have strong ties in the Microsoft partner and customer ecosystem.

OpsHub Integration Manager's broad ALM support addresses the challenges faced by larger corporations looking at Team Foundation Server (TFS) adoption within their heterogeneous ALM environments. Polaris Solutions offers best-in-class ALM implementation and consulting services, with expertise in TFS adoption for the most challenging environments.

"The OpsHub and Polaris Solutions partnership provides great value to our customers by helping them quickly adopt Team Foundation Server (TFS) within their heterogeneous ALM environments. Our combination of powerful integration tools and deep implementation experience will help drive efficiency and collaboration between cross-functional teams working on multiple projects in the software development lifecycle," said Chris Kadel, Principal at Polaris Solutions.

"Our customers will greatly benefit from the deep TFS expertise and best-in-class ALM implementation and consulting services of Polaris Solutions," said Sandeep Jain, President and CEO of OpsHub, "and will be able to quickly adopt Team Foundation Server (TFS) using OpsHub's integration and migration platform, reaping the rewards of Visual Studio that much faster."


About Polaris Solutions

Polaris Solutions is an Application Lifecycle Management (ALM) consulting firm with offices in Chicago, St. Louis, and Denver. Polaris Solutions specializes in helping teams deliver high value software through technical leadership, process improvement, and software development expertise. Polaris Solutions provides industry proven expertise in software delivery and deep technical knowledge to its clients. It offers fresh insights and new directions to help companies take their next step forward with today’s powerful technologies.

http://www.polarissolutions.com.

For more information: info@polarissolutions.com

About OpsHub

OpsHub is the leading provider of Application Lifecycle Management (ALM) integration and migration solutions for application development organizations. OpsHub creates a unified ALM ecosystem by seamlessly combining individual ALM systems, enabling agility at scale.

The OpsHub solution provides the most comprehensive out-of-the-box integration and migration capability within the ALM ecosystem, spanning requirements management, source control, bug tracking, test management, release management, and customer support.

The OpsHub solution enables quick migration and seamless integration between leading ALM systems including those from HP, Microsoft, IBM, Accept, Atlassian, Rally, Serena, and more. OpsHub delivers their on-premise and cloud-based solutions to enterprises around the globe. For more information, visit http://www.opshub.com.

For more information, press only:

Jyoti Jain

marketing@opshub.com

For more information on OpsHub support for Microsoft Visual Studio 2013, visit:

http://www.opshub.com/tfsinfo

Polaris is a Platinum Sponsor for Chicago Code Camp 2014

Polaris has just signed on as a Platinum Sponsor for this year’s Chicago Code Camp at the College of Lake County campus on April 26th, 2014.

Chicago Code Camp is a free, community-driven developer conference. Over 200 attendees and 17 sponsors filled the campus of College of Lake County last year and this year is shaping up to be even better.

This amazing event is completely free, so head on over and register now!

Polaris Solutions is a Gold Sponsor for Agile Gravy STL 2014

We’re proud to announce that we just signed on as a Gold sponsor for the first annual Agile Gravy St. Louis conference – a rich & savory 1-day Agile conference experience that’s worth soaking up every last drop. Early bird registration is $99.

The event will be held on Thursday, April 10th at the St. Louis Marriott West. Drop by our booth and say HI!

I’m not a phony, and neither are you

A while back I read a blog post titled "I'm a phony. Are you?" by Scott Hanselman. If you're reading this blog, you are probably well aware of who Scott Hanselman is. But for those who are unfamiliar, Scott is basically the definition of Rock Star Programmer. I really respect his opinions and try to follow his advice when he gives it. But he's a phony!?

After reading his post, something felt a little… off about it. How could someone who has achieved celebrity status in the IT world describe himself as a fake? And what does that say about me? Or you? I'll tell you, my initial reaction was: "Well then, I should just quit now." If someone I view as so totally better than me is a phony, then I'm clearly not cut out for this.

But when we dive a little deeper into his message, he's not calling himself a phony in general; he explains that he gets into situations that put him in over his head. The feeling of "I have no idea what I'm doing here, but I'm doing it anyway" is what makes us feel dirty. We are the experts here; we're getting paid to know this. I'm taking their money and I'm just 'winging it'? That makes me a phony.

OK, so I can see where he's coming from, and I can see how this makes us feel bad. It seems like people in the IT industry get down on themselves when they realize they don't know EVERYTHING about something. Scott explains how this feeling helps motivate him, and I think this quote sums up what I think is wrong with that line of thinking:

I use insecurity as a motivator to achieve and continue teaching.

If that works for him, great, but I don't think feelings of inadequacy should be a regular motivator for anyone. One thing people in our industry should understand is that they aren't going to know everything about everything; it's just not going to happen. More importantly, we're not hired because we know everything about everything. We're hired to do a job, and our ability to deliver on it is what makes us what we are. Do the people who work with you or hire you care that you googled how to do something? No, they care that you delivered what you said you would, when you said you would. Your ability to adapt to new situations and apply new concepts is what makes you valuable.

‘I don’t know’ is an acceptable answer

I think the root of this entire problem boils down to the fact that people are afraid to say "I don't know". It's OK not to know something. It's what you do after recognizing that you don't know something that makes the difference. How quickly can you move from 'I don't know' to 'I do know'?

Continuous learning and adaptability are the most important things when it comes to building software. Our industry, tools, methodologies, problems, and goals change constantly. Adapting to these challenging and ever-changing demands is where we should be building confidence in ourselves, not making ourselves feel inadequate because we aren't already familiar with the new conditions.

I realize that Scott's "I'm a phony" claim is probably meant to be tongue-in-cheek and a bit embellished, but I do think people in our industry suffer from impostor syndrome, and they shouldn't. Scott is not a phony. I'm not a phony. Neither are you. If you set clear expectations and meet them, you are doing what you're supposed to.

Polaris is a Platinum Sponsor of St. Louis Days of .NET

[Photo: Clint setting up the Polaris booth at St. Louis Days of .NET]

For all of our friends in the St. Louis, MO area: St. Louis Days of .NET is now under way. Polaris is proud to be a platinum sponsor for the event and will be hosting a booth, which, as you can see in the photo above, Clint is working furiously to set up.

If you’re wandering around STLDODN and looking for some great talks, check out some of the Polaris folks in action:

Chris Kadel will be participating in the TFS pre-compiler on Thursday, Nov 14th from 8:30am to 5pm: http://www.stldodn.com/2013/pre-compilers. It is a FULL-DAY hands-on workshop and it's only $75 to attend, so sign up fast. You can't get training like this at such an amazing price anywhere else that I know of.

A Pragmatic Intro to Unit Testing by our very own Josh Gillespie

Advanced OOP by our newest team member and former Softie Clint Edmonson

Agile Testing in a Waterfall World by Angela Dugan

Application Architecture Jumpstart also from Clint

Dude I Just Stepped into Your Code from Josh